The Dawn of Ambient Intelligence and Value-Creating Innovation
Abstract
A high-frequency trader settles option derivatives with a seller from his tablet in nanoseconds, exploiting optimizations in link latency and operating-system processing latency, without routing through a co-located exchange server and the several intermediaries in between. Or a wind turbine continuously learns the flow of demand and, on that basis, allocates capacity and assigns resources directly to the distribution point at an EV charging station, without multiple intermediate hops. The whole encounter lasts barely a couple of seconds. These are not the “overfitted brain hypotheses” of academics. We see the term ‘ambient’ applied to concepts like customer experience, and it can be hard to pin down what this means in a material sense. The wider definition of ambient music helps draw the connection. Ambient music, unlike background music, supports a spectrum of focus, from being completely ignorable to drawing attention to the environment around us. It does so by enhancing acoustic and atmospheric idiosyncrasies, and it is intended to engender calm and provide a space to think.
Ambient intelligence moves the concept into the business realm. User experiences driven by ambient intelligence put a new perspective on what consumers do in the real world by providing them with low-intrusion, minimum-friction, opt-in features that make their lives easier and more interesting. Stanford University defines the concept as “the field to study and create embodiments for smart environments that not only react to human events through sensing, interpretation and service provision, but also learn and adapt their operation and services to the users over time.” This trend of moving Artificial Intelligence (AI) and machine learning (ML) processing from centralized Cloud platforms to the Endpoint is motivated by a variety of factors, including high-frequency throughput, bandwidth constraints, the cost of energy, the availability of connectivity and, above all, data privacy. Efficient AI inference at the embedded endpoint demands devices that can infer, pre-process and filter data in real time. This allows companies – from high-frequency trading in the financial sector to solar and wind farms in the renewable-energy sector – to optimize device performance and analyze application-specific data points directly at the endpoint, while avoiding the aforementioned constraints. It is particularly effective for detecting abnormality risk, predicting human behavior, optimizing costs, allocating capacity, assigning resources and tracking the state of things, and it has already begun to be applied.
What is now referred to as ambient intelligence had its origins in a field called ubiquitous computing – a term coined by its “father,” Mark Weiser. He proposed a model where humans will not interact with just one computing device but rather with many, in the form of a set of dynamic, small, networked computers – often invisible and embedded in everyday objects. Today, we recognize the shoots of this idea bearing fruit in our serial use of multiple devices like smartphones, tablets, laptops, and consoles, where consumers begin a transaction on one device, progress it on another, and complete it on another still. We also use these devices in parallel, and the past couple of years have seen experimentation in the form of second-screen experiences for TV shows, gaming, and customer support experiences.
The challenge for the semiconductor industry isn’t merely to catch up to where Taiwan is today, and where China plans to be by 2025, but to meet the demand of the future – or go further.
Until recently, most artificial intelligence (AI) and machine learning (ML) processing ran in the Cloud. In recent years, however, AI processing has been expanding from the Cloud to the Edge and the Endpoint. Moreover, the current chip shortage has put a spotlight on supply-chain vulnerabilities in the U.S. and forced many companies to reconsider their business strategies; the inability to meet demand for chips is playing a role in surging inflation.
Our smart gadgets, however, are just the beginning. For ambient intelligence to really flourish, we require many more objects to be digitally enabled and for these objects to listen to and respond to the environment and each other. The midterm will see the Internet of Things (IoT) enable inexpensive networking and connectivity, to be applied to everything from stockroom crates to tables and children’s toys. The protocols supporting this pervasive communication are being enabled by mobile signals, Wi-Fi, NFC tags, and Bluetooth Low Energy. Ambient intelligence is not just one technology; it is a coalition of many working in unison.
What do we mean by Edge AI and Endpoint AI?
Edge AI moves AI and ML processing from the Cloud to powerful servers at the edge of the network – offices, 5G base stations and other physical locations very near their connected endpoint devices. Edge devices sit at the edge of the network, or cloud edge, before wireless signals reach endpoint devices. By moving AI/ML compute closer to the data, we reduce latency and ensure that more of that data’s value is retained.
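As a rough illustration of why moving compute toward the data matters, the following back-of-envelope sketch compares latency budgets for Cloud, Edge and Endpoint processing. All distances, hop counts and timings are assumed for illustration, not measured:

```python
# Back-of-envelope latency budget (illustrative numbers, not measurements).
SPEED_OF_LIGHT_FIBER_KM_PER_MS = 200          # light travels ~2/3 c in fiber

def round_trip_ms(distance_km, per_hop_ms, hops, compute_ms):
    """Propagation there-and-back, plus per-hop queuing both ways, plus compute."""
    propagation = 2 * distance_km / SPEED_OF_LIGHT_FIBER_KM_PER_MS
    return propagation + 2 * hops * per_hop_ms + compute_ms

# Hypothetical deployment: distant Cloud region vs. nearby Edge server.
cloud = round_trip_ms(distance_km=1500, per_hop_ms=0.5, hops=10, compute_ms=5)
edge  = round_trip_ms(distance_km=15,   per_hop_ms=0.5, hops=2,  compute_ms=5)
endpoint = 5.0                                 # inference runs on the device itself
print(f"cloud ~{cloud:.1f} ms, edge ~{edge:.1f} ms, endpoint ~{endpoint:.1f} ms")
```

Even with generous assumptions, the endpoint wins simply because propagation and per-hop delays drop to zero when the data never leaves the device.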
Endpoint devices, by contrast, sit at the final end of a communication link: the physical devices connected to the network edge – from sensors to smartphones to tablets to smart gadgets and beyond. Because so much data is generated at the endpoint, we can maximise the reasoning, inference and insight we gain from that data by empowering endpoint devices to think for themselves and process what they collect without moving the data anywhere – or, rather, by selectively moving only relevant data to the Edge and Cloud. TinyML, which focuses on optimizing machine learning (ML) workloads, is an emerging sub-field of Endpoint AI that enables ML processing in some of the very smallest endpoint devices, containing microcontrollers no bigger than a grain of rice and consuming mere milliwatts of power.
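To make the TinyML idea concrete, here is a minimal sketch, in plain NumPy, of the int8 quantized arithmetic that microcontroller-class ML kernels typically rely on to avoid floating-point math in the hot loop. The layer sizes and tolerance are illustrative assumptions, not a real deployed model:

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor quantization: float32 -> (int8 values, scale)."""
    scale = float(np.max(np.abs(x))) / 127.0
    if scale == 0.0:
        scale = 1.0
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dense_int8(x_q, x_scale, w_q, w_scale, bias):
    """Integer dense layer: accumulate in int32, rescale to float only at the end."""
    acc = x_q.astype(np.int32) @ w_q.astype(np.int32).T  # int32 accumulator
    return acc * (x_scale * w_scale) + bias

# Tiny illustrative layer: 4 sensor features -> 2 output scores.
rng = np.random.default_rng(0)
w = rng.standard_normal((2, 4)).astype(np.float32)
x = rng.standard_normal(4).astype(np.float32)
b = np.zeros(2, dtype=np.float32)

w_q, w_s = quantize_int8(w)
x_q, x_s = quantize_int8(x)
y_int8 = dense_int8(x_q, x_s, w_q, w_s, b)
y_fp32 = w @ x + b
assert np.allclose(y_int8, y_fp32, atol=0.25)  # quantized result tracks float32
```

Weights shrink 4x versus float32 and the inner loop needs only integer multiply-accumulate, which is what makes milliwatt-scale inference plausible on a microcontroller.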
Endpoint AI has great potential to create new and deeper benefits across areas including high-frequency trading, renewable energy generation and distribution, medicine, education, security, and a range of other applications affecting nearly all aspects of our daily lives.
In high-frequency trading, for example, by combining trade repository data from multiple sources covering three types of contracts – interest rate, credit, and foreign exchange derivatives – Endpoint AI will be able to study how liquidity shocks related to variation margins propagate across decentralized networks and translate into payment deficiencies across different derivative markets. The resulting inferences and insights suggest that, in extreme theoretical scenarios where liquidity buffers are small, financial institutions may experience significant spillover effects due to the directionality of their portfolios. Endpoint AI, if designed well, will make derivative markets safer by reducing systemic risk and improving counterparty risk management.
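The propagation idea can be sketched with a toy model. The counterparty network, buffers and shock values below are entirely hypothetical, chosen only to show how a thin buffer at one institution becomes a persistent shortfall:

```python
import numpy as np

def propagate_margin_shock(exposure, buffers, shock, rounds=10):
    """Toy propagation of variation-margin shocks over a counterparty network.

    exposure[i, j]: fraction of institution i's payment shortfall that lands on j
    buffers[i]:     liquidity buffer available to absorb margin calls at i
    shock[i]:       initial variation-margin call hitting i
    Returns the payment shortfall at each institution once the process settles.
    """
    calls = shock.astype(float).copy()
    shortfall = np.zeros_like(calls)
    for _ in range(rounds):
        # Each node absorbs what its buffer covers; the rest becomes a shortfall.
        shortfall = np.maximum(calls - buffers, 0.0)
        # Unpaid margins propagate to counterparties as fresh calls next round.
        new_calls = shock + exposure.T @ shortfall
        if np.allclose(new_calls, calls):
            break
        calls = new_calls
    return shortfall

# Three hypothetical institutions; B has a thin buffer and a directional book.
exposure = np.array([[0.0, 0.6, 0.4],
                     [0.5, 0.0, 0.5],
                     [0.3, 0.7, 0.0]])
buffers = np.array([100.0, 20.0, 80.0])
shock   = np.array([ 30.0, 50.0, 10.0])
shortfalls = propagate_margin_shock(exposure, buffers, shock)
print(shortfalls)
```

In this made-up case the well-buffered institutions absorb the spillover, so the deficiency stays concentrated at the thinly buffered node rather than cascading – which is exactly the kind of regime boundary such a model lets one probe.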
Another key function of Endpoint AI is sensor fusion: combining data from multiple sensors to create complex pictures of a process, environment or situation. Consider a wind turbine as an Endpoint AI device in a renewable-energy application, tasked with combining data from multiple sensors within a farm to predict when mechanical failure is likely to occur, correlate that with demand, and optimally assign distribution to another Endpoint AI device – the EV charging station. These Endpoint AI devices learn the interplay between sensors, and how one might affect another, and apply this learning in real time to improve end-to-end throughput at significantly lower cost.
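A minimal sketch of such sensor fusion might look like the following. The channels, nominal operating points and weights are invented for illustration; a real system would learn them from labelled maintenance data:

```python
import math

def failure_risk(vibration_rms, bearing_temp_c, wind_speed_ms):
    """Fuse three turbine sensor channels into one failure-risk score in [0, 1].

    Features and weights are illustrative placeholders, not field-calibrated.
    """
    # Normalize each channel against an assumed nominal operating point.
    v = (vibration_rms - 2.0) / 1.0      # mm/s RMS, nominal ~2.0
    t = (bearing_temp_c - 65.0) / 10.0   # deg C, nominal ~65
    w = (wind_speed_ms - 10.0) / 5.0     # m/s, nominal ~10
    # High vibration at low wind is more suspicious than at high wind, so an
    # interaction term sits alongside the per-sensor terms - this cross-channel
    # coupling is what plain per-sensor thresholds miss.
    z = 1.4 * v + 0.9 * t - 0.3 * w + 0.8 * v * max(-w, 0.0) - 1.0
    return 1.0 / (1.0 + math.exp(-z))    # logistic squash to a score in (0, 1)

healthy  = failure_risk(2.1, 64.0, 11.0)   # near-nominal readings
degraded = failure_risk(4.5, 82.0, 6.0)    # hot bearing, shaking in light wind
assert healthy < 0.3 < 0.7 < degraded
```

Running this per-turbine on the endpoint means only the fused score, not the raw sensor streams, ever needs to travel toward the Edge or Cloud.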
Think of an autonomous driving system, in another example, that can detect and avoid collisions by recognizing objects or take over from the driver in an emergency. Or a portable scanner which can detect medical symptoms in areas that don’t have readily accessible healthcare infrastructure.
There are huge amounts of real-world information that require machine learning to interpret and act upon at the Endpoint – the touchpoints in the journey of a service experience. The only viable and cost-effective way to analyze such massive amounts of data – images, movement, or video – with minimal delay is through Endpoint AI and embedded ML inside an Endpoint AI chipset. Many of the models, algorithms and applications that currently run in the Cloud or at the Edge, mobilizing data and conducting it like an orchestra, could very well run on Endpoint devices.

Centralized vs. Decentralized Computing
One of the most enduring models for understanding a complex system such as this is the agent model. Agent models are popular in artificial intelligence and can support decentralized, self-learning systems with goals such as those outlined above. The model sets up a cyclical loop: sense, think, and act. Customer experience professionals should view ambient intelligence through this framework and consider the following:
1. Sense: Ask permission for unobtrusive listening and interaction with the consumer’s environment. Today, the explosion of smartphones kitted out with a slew of onboard sensors provides the most accurate “remote viewing” of people, objects, and places. Context is the key to relevance and requires many data signals, so the goal for experience designers is to obtain permission from customers to sense their location, orientation, identity, nearby temperature, and other environmental, social, and emotional cues. Interaction design supported by onboard cameras and microphones will provide insights into consumers’ moods, as well as what they are saying and doing.
The rapid growth of the Internet of Things (IoT) has been the catalyst for a re-evaluation of how best to deploy and arrange assets – influenced by factors including network availability, security, device compute power, and the cost of sending data from Endpoint to Cloud or storage repository – and this has driven a shift back to distributed computing models.
Often every second counts in these scenarios and they can be made possible only if Endpoint devices are faster, smarter, more secure, and more reliable. These Endpoint AI-based applications can sometimes sound futuristic but are closer than many consumers realize.
2. Think: Spend the majority of your time thinking about how you can provide value now and in the near future. Experiences will not be ambient if users are presented with a bazaar of bleeps and messages that sound like a Tokyo high street. Therefore, ambient intelligence mandates restraint and relevance. The majority of your development effort will be spent on models that hypothesize how your brand can help consumers – not sell products to them. Today, web analytics are maturing into multi-signal sense-making tools fueled by Big Data. Additionally, business logic is expanding to work alongside algorithms and knowledge-expert systems developed for artificial intelligence to realize the goal of understanding context and relevance.
Using AI/ML at the endpoint to automate processes reduces the potential for errors, delays, operational and transactional costs, and the mental workload of ground-level staff, leading to a more transparent, more accountable system. To realize the benefits of these applications, though, they require low-power, high-performance compute in tiny devices.
3. Act: Pass messages to the user only when they are hyper-relevant, and allow users to train your systems easily. Experiences driven by ambient intelligence need to react quickly to events or signals from the user. The systems also need to offer proactive support via pushes and alerts. Consider a smartwatch, with its subtle conveyor belt of alerts about locations, meetings, and other augmentations of the real world that consumers can either ignore or act upon. Users must be able to calibrate these push messages to train the system to respond appropriately.
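The sense-think-act cycle described in these three steps can be sketched as a minimal loop. The signal fields, relevance threshold and feedback rule below are hypothetical placeholders, not a product design:

```python
from dataclasses import dataclass, field

@dataclass
class AmbientAgent:
    """Minimal sense-think-act loop for an ambient assistant (illustrative only)."""
    relevance_threshold: float = 0.7              # only hyper-relevant alerts surface
    feedback: dict = field(default_factory=dict)  # user training: kind -> weight

    def sense(self, environment):
        # 1. Sense: collect only signals the user has consented to share.
        return [s for s in environment if s.get("consented")]

    def think(self, signals):
        # 2. Think: score each signal's relevance, shaped by past user feedback.
        for s in signals:
            s["score"] = s["base_relevance"] * self.feedback.get(s["kind"], 1.0)
        return signals

    def act(self, signals):
        # 3. Act: push only what clears the bar; everything else stays ambient.
        return [s["message"] for s in signals if s["score"] >= self.relevance_threshold]

    def train(self, kind, liked):
        # Users calibrate the system: dismissed alert kinds fade, useful ones grow.
        self.feedback[kind] = self.feedback.get(kind, 1.0) * (1.1 if liked else 0.5)

agent = AmbientAgent()
env = [
    {"kind": "meeting", "base_relevance": 0.9, "message": "Leave now for 3pm", "consented": True},
    {"kind": "promo",   "base_relevance": 0.8, "message": "Sale nearby!",      "consented": True},
    {"kind": "health",  "base_relevance": 0.6, "message": "Stand up",          "consented": False},
]
agent.train("promo", liked=False)   # the user dismissed promos before
alerts = agent.act(agent.think(agent.sense(env)))
assert alerts == ["Leave now for 3pm"]  # promo demoted by feedback; unconsented signal dropped
```

The key design choice is that restraint is the default: a signal must earn its way past both consent and a feedback-weighted relevance bar before it ever interrupts the user.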
Estimates for this new dawn were once penciled in for around 2020, and the technologies needed to deliver it are all present today – albeit unevenly distributed.
Endpoint AI-Chip
For ambient intelligence to flourish, it requires a different kind of AI chip, different software tools to design it, and different types of talent – from modelers to engineers to developers – to move ideas from inspiration to prototype and production quickly. In the search for the best way to accelerate AI/ML performance at the endpoint and keep up with the requirements of cutting-edge neural networks, many startup companies are springing up around the world with new ideas about how this is best achieved. They are attracting a lot of venture-capital funding, and the result is a sector rich not just in cash but in novel ideas for computing architectures.
This need for a streamlined Endpoint AI chip is all the more urgent because demand for new AI/ML designs is inhibited by the lack of simplicity in the tools and ecosystem. As with many modern technologies, ecosystem collaboration is crucial to making it easier for developers to deploy endpoint AI. With so much hinging on low latency, one of the latest technologies being leveraged to improve high-frequency throughput is the FPGA (field-programmable gate array), a reprogrammable chip that allows ultra-low-latency processing of complex algorithms. The FPGA’s parallel hardware architecture makes it a solution for reducing round-trip latencies in receiving exchange data and executing trade orders.
Nonetheless, there are trade-offs between hardware-driven and software-driven designs. To achieve high-frequency throughput with low end-to-end round-trip latency, one first needs very efficient neural-network models and algorithms in a multi-layered network architecture. High-frequency throughput and a multi-layered network architecture imply the use of ultra-fast network communications, high-performance messaging, specialized design tools, and operating-system optimizations such as kernel-bypass networking.
What benefits will ambient intelligence bring to the customer experience? Future users won’t have sore thumbs from jabbing at screens as we do; their technology will be hidden away, continually listening to them and anticipating what they need. They will train it like a pet through their voices, faces, and gestures – and they’ll come to see our continual button-clicking as about as efficient as starting a fire with a flint.