Wearable Tech & Smart Cities
Strategies for plugging Human-Powered Transportation (HPT) into the IoT
As the most vulnerable component of a multi-modal transportation system, HPT (Human-Powered Transportation) must be included in the planning and execution of municipal infrastructure improvements if the safety goals of the Smart Cities movement are to be realized. The needs of the HPT community are fundamentally different from those of other transportation modes, and the consequences of a mishap are far more dire.
Driven by several ubiquitous and undeniable trends, the Smart Cities movement has spread to a number of related initiatives (Smart Growth America, People For Bikes, Vision Zero, HAAS Alert) and taken on a life, and an urgency, of its own. This document explores the current state and future promise of plugging what will eventually be the last vestiges of human-controlled vehicles on the roadways into the smart-cities infrastructure. In this context, HPT will be the unpredictable wild card: blind, deaf, and mute among legions of mechanized actors with near-perfect situational awareness. The mechanized elements, aided by advanced driver-assistance systems (ADAS) in the short term and fully connected and autonomous vehicles (CAV) in the long term, must therefore accommodate HPT, not the other way around. Technically speaking, for all participants to realize the safety benefits, two basic components are required: 1) connectivity of the vehicle or HPT to the Internet, and 2) the vehicle's or HPT's ability to sense and interact with its surrounding environment. We envision wearable tech evolving in kind so that HPT can continue to thrive in the coming era of autonomous vehicles and Smart Cities.
A Smart City and Its Future
In very general terms, a smart city optimizes efficiency and improves planning and safety by being data-driven. Just as in business intelligence (BI), volumes of operational data, together with the data scientists who make sense of it all, have become the secret sauce of corporate efficiency. And as with BI, tech innovation is leading the way. But whereas BI rests on the storage and database technology needed to handle data sets of a size that was unimaginable less than a generation ago, the smart city rests on a confluence of technologies all just now coming into commercial view.
Advances in chip-scale technology now allow us to put radar in a computer "chip" that's 8 mm square and entire computing systems into a package the size of a postage stamp. Progress in advanced materials has made it feasible to harvest enough energy from the local environment to make these systems completely autonomous, never needing a battery or maintenance of any kind. Sending data over the air (telemetry) is easier than ever before and obviates the need for wires, with their associated expense and unreliability. The list goes on and continues to grow. Deploying these new capabilities in the field is the heart of the Internet of Things (IoT), and indeed the heart of a smart city. Data about traffic patterns, parking lot use, accidents, road hazards, microclimates, etc. can now be funneled into capable BI systems and analyzed to produce the secret sauce of any smart city. The benefits are clear and plentiful, from better utilization of snow-clearing resources to mitigating traffic congestion.
As we move to self-driving cars, trucks, and buses, these benefits will compound, because the robotic chauffeurs always have access to this data and, unlike humans, can adjust in real time. This is all well and good for mechanized transport, but what about HPT? How do we fit these most vulnerable participants into a digital ecosystem when their inputs and outputs are primarily analog?
Inputs and Outputs
If you’re reading this, there’s a pretty good chance you’re
familiar with Neo and Mr. Anderson of “The Matrix” fame. Someday in the future
we’ll have the capability to “plug in” directly to the internet (or whatever it
becomes at that time) as easily as we plug a monitor into a computer today.
Every 2-way communication interface ever in existence has two fundamental
components: inputs and outputs, or I/O for short. As I type this document, the
inputs are the keys on the keyboard, and the terminus of the outputs are the
words that I’m reading on the monitor. For a USB interface, the I/O
collectively are the handful of electrical signals necessary to implement the
complex protocol that makes it possible for me to use this keyboard with the
computer in the first place. Electrical engineers sometimes use the word “talk”
to describe when such an interface actually works. We use that word because
speech is our most important and most familiar I/O.
Humans have astonishing mastery of natural language processing
(NLP) given how insanely difficult this is to program into a computer. In part
because this is so difficult, and in part because speaking to a smartphone in a
crowd can be disruptive and a potential invasion of privacy, we resort to a
visual interface which has served us well for the last several decades – the
so-called “gooey” or GUI (graphical user interface). But as we all know, the GUI
has some drawbacks, one of them being that it occupies our sight which is very
important to our ability to process our surroundings. Texting while driving has
been an issue since the beginning of the handheld GUI, and now that issue has
spilled into the HPT community. Cities around the world struggle with this,
particularly the dense cities of Western Europe and Scandinavia, where
bicyclists can outnumber automobiles five to one. Some cities in Germany are
resorting to placing traffic signals in the sidewalks to alert pedestrians with
a downward gaze that they’re walking into potential danger. For the HPT
community, what we propose is an MMI (man-machine interface) that leaves our
eyes free for processing the critical visual information of our surroundings
but doesn’t rely on problematic NLP.
So if we rule out speech for output, and a GUI for input or output, what’s
left? Humans have limited options for input. Taste and touch are likely not
granular enough to be terribly useful; that is, they cannot transfer complex
information in a reasonable amount of time or with reasonable effort (Morse
code being one example for touch). The remaining sense at our disposal is
hearing, backed by our ability to process speech very well. We’ll explore the
interface between humans and computers next.
The Man-Machine Interface (MMI)
The MMI is an age-old problem that harks back to the very beginnings of
computing. From punch cards to Siri, fueled by Moore’s law, the state of the
art has advanced dramatically over the course of a single generation. The
current generation is on the cusp of a transformation to a fully speech-based
MMI in which computers are fully Turing-test compliant, even when the
signal-to-noise ratio is extremely poor (because ambient noise is high or
because broadcasting the conversation is not desirable) or supplementary
external computing resources are unavailable. In popular culture, these days
are depicted quite well in the film “Her,” where the streets are filled with
pedestrians talking only to their devices while oblivious to each other, or in
“Star Trek IV,” where the Enterprise crew travels back in time to the
20th century and Mr. Scott picks up a mouse and tries to speak to it.
Table 1: MMI Options*
* Note: traditional graphical/pointer I/O and variants not listed due to eyes-free requirement.
[Table not reproduced.]
Whether unfortunately or not, those days are still several years away, and in the meantime we have converged on a graphical/capacitive-touch or mechanical-button interface for when speech-processing conditions are not absolutely ideal. This is problematic for HPT actors because, when it really matters, they are almost always in those conditions. Luckily there are many alternatives, several of which are highlighted in Table 1: MMI Options.
But largely due to the drawbacks listed, most of these options are not well suited to the needs of the HPT either. Using our present vantage point as context, we can see some of the requirements for the “system” (referring now to the smart cities infrastructure) as follows:
1. The system shall be HPT-aware.
2. The HPT shall push position, speed, heading, and mode to the system in real time.
3. The system shall be able to push relevant known hazards to the HPT in real time.
4. The HPT elements shall be able to push first-hand knowledge of hazards to the system in real time without compromising situational awareness.
5. The HPT elements shall be able to extract pertinent and configurable information from the system in real-time without compromising situational awareness.
6. Tying into the system shall not pose an undue burden to the consumer, either in cost, size, convenience or complexity.
7. The I/O between the HPT and the system shall not be affected by environmental conditions.
8. For at least the near future, the HPT will not be able to rely exclusively on the system for real-time, actionable intelligence on immediate and potential hazards.
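To make requirement 2 concrete, a connected HPT device might periodically publish a compact telemetry record along the lines of the sketch below. The field names, units, and JSON framing are illustrative assumptions on our part; no established smart-cities schema is implied.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class HptTelemetry:
    """Minimal position/speed/heading/mode report (requirement 2).

    Field names and units are illustrative; no standard schema is implied.
    """
    device_id: str
    lat: float          # degrees, WGS-84
    lon: float          # degrees, WGS-84
    speed_mps: float    # metres per second
    heading_deg: float  # 0-360, clockwise from true north
    mode: str           # e.g. "bicycle", "pedestrian", "wheelchair", "scooter"
    timestamp: float    # Unix epoch seconds

    def to_json(self) -> str:
        # Serialize to a compact JSON string suitable for a low-rate radio link.
        return json.dumps(asdict(self))

# Example report from a hypothetical connected bicycle
msg = HptTelemetry("bike-0042", 41.8781, -87.6298, 5.2, 87.0,
                   "bicycle", time.time())
payload = msg.to_json()
```

A real deployment would add message authentication and rate limiting, but the essential point is how little data is needed to make the HPT visible to the system.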
Given these constraints, our empirical and academic research has led us to conclude that a magnetic-sensor-based solution for human output/machine input, paired with audio for machine output/human input, is the best MMI for HPT actors in the smart-cities use case. It produces no false positives, supports configurable and essentially unlimited functions, is unaffected by adverse environmental conditions or the associated protective equipment, and is eyes-free. The necessity of a magnetic actuator has turned out not to constitute an undue burden, and proof of concept has been achieved.
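A minimal sketch of why the magnetic approach avoids false positives, assuming a worn magnetometer and a passive magnet (e.g. a ring) as the actuator: a gesture registers only when the field magnitude far exceeds the Earth's ambient field, which ordinary motion can never produce. The threshold values and function names below are illustrative, not our production parameters.

```python
import math

EARTH_FIELD_UT = 50.0          # typical ambient geomagnetic field, microtesla
GESTURE_THRESHOLD_UT = 400.0   # magnet held near sensor; illustrative value

def field_magnitude(bx: float, by: float, bz: float) -> float:
    """Magnitude of a 3-axis magnetometer sample, in microtesla."""
    return math.sqrt(bx * bx + by * by + bz * bz)

def is_gesture(sample: tuple) -> bool:
    """True only when a magnet is deliberately brought near the sensor.

    The ambient field (~50 uT) cannot cross the threshold, so ordinary
    motion, weather, and gloves produce no false positives.
    """
    return field_magnitude(*sample) > GESTURE_THRESHOLD_UT

# Ambient reading vs. a magnet-near reading (values illustrative)
ambient = (30.0, 20.0, 35.0)     # |B| ~ 50 uT: no gesture
near_magnet = (300.0, 250.0, 200.0)  # |B| ~ 440 uT: gesture
```

Distinguishing multiple gestures (taps, holds, swipes of the magnet) then becomes a matter of thresholding over time rather than a hard sensing problem.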
The Sensor Suite
As the HPT becomes an equal participant in the smart-cities ecosystem, the information delivered to and harvested from the system will be very similar to that of its mechanized counterparts, but there are bound to be differences. For example, it wouldn’t make sense to put backup sensors on a bicycle, whereas they’re indispensable for a panel van. We will be exploring several sensor technologies to keep the HPT elements (cyclists, pedestrians, wheelchairs, and now scooters) in sync with the smart-cities infrastructure and out of harm’s way. Discovery and innovation will likely modify this list over time, and we will keep a watchful eye on emerging capabilities.
Lidar – power- and compute-intensive, but has awesome resolving power for distinguishing, say, a pickup truck from a sedan
Radar – also now available in IoT-compatible form factors; doesn’t have quite the resolving power but is not affected by weather conditions
UWB – RF technology just now being commercialized that works outside the ISM (industrial, scientific, medical) bands to get very accurate proximity (on the order of a centimeter), though unlike lidar it requires a radio at both ends of the measurement
BTLE 5 extended-range RSSI – low-rate, long-range data transmission will allow signal-strength range measurements up to 200 m; although the absolute measurement remains somewhat inaccurate, relative measurements will allow determination of closing speed in time to do something about it
Audio – we will be using microphones to gather audio data and DSP (digital signal processing) methods to discriminate and enhance important audible clues, such as sirens, barking dogs, and honking horns
Aural spatialization – an array of microphones (rather than a single mic) enables the user not only to hear a sound but to know exactly where it’s coming from (listen to the virtual barbershop simulator)
GPS – this will of course play a role, but it relies on line-of-sight to at least four satellites, which is not always available in urban environments; already available in the phone
WiFi – triangulation of WiFi signals picks up some of the GPS slack in cities; already available in the phone
Inertial sensors – accelerometer, magnetometer, gyroscope, speedometer; these allow real-time system awareness of sudden changes in speed and heading, though what mix of them will be needed remains to be seen
IR – infrared (night-vision) cameras are just now becoming available to consumers, but smaller form factors are still a ways off; this is also a bit outside our wheelhouse and better suited to AR (which is itself a form of wearable tech, just not one we’re currently working on)
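To make the BTLE RSSI idea above concrete: under the common log-distance path-loss model, successive signal-strength samples can be converted to range estimates, and their difference over time yields closing speed. The reference power and path-loss exponent below are illustrative assumptions; real deployments calibrate both per environment, which is why relative (not absolute) measurements are the useful output.

```python
# Closing-speed estimate from two RSSI samples using the log-distance
# path-loss model: rssi = TX_REF_DBM - 10 * PATH_LOSS_N * log10(d).
# Both constants are assumptions for illustration, not calibrated values.

TX_REF_DBM = -59.0  # assumed RSSI at 1 m from the transmitter
PATH_LOSS_N = 2.0   # free-space exponent; typically 2.7-4 in cluttered areas

def rssi_to_distance_m(rssi_dbm: float) -> float:
    """Invert the path-loss model to estimate range in metres."""
    return 10 ** ((TX_REF_DBM - rssi_dbm) / (10 * PATH_LOSS_N))

def closing_speed_mps(rssi_t0: float, rssi_t1: float, dt_s: float) -> float:
    """Positive result means the transmitter is approaching.

    Absolute range is inaccurate, but the *change* in range over a short
    interval still gives a usable closing-speed estimate.
    """
    d0 = rssi_to_distance_m(rssi_t0)
    d1 = rssi_to_distance_m(rssi_t1)
    return (d0 - d1) / dt_s
```

For example, a signal strengthening from -79 dBm to -59 dBm over one second corresponds (under these assumed constants) to roughly 9 m/s of closure, fast enough to warrant an audible alert.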
All of this is for naught if the participant can’t maintain focus while interacting with the system; that is, the eyes-free gesture interface described above is a required component.
Telemetry standards for bidirectional communication between the infrastructure and participants are still evolving, and keeping abreast of them is critical, which is partly why we chose an RF module that also supports 802.11x.
In the short term the emphasis will be on collecting data about HPT habits, and that is what we will focus on as well (aside from some very low-hanging fruit).
As the most vulnerable and fastest-growing transportation mode, HPT must be fully considered when deploying smart-cities technology; it would be a serious mistake not to. Perhaps due to a lack of awareness, these modes are not being given an equal place at the table. Our work at Gadgettronix has demonstrated that transforming “dumb” bikes, scooters, wheelchairs, and pedestrians into “smart” ones is technologically and economically feasible, and we have put forth a plan to make that a reality.