Rhythmic Pattern In Behavior – New Geometric Design, New Metrics

Abstract

Many bodily functions, such as walking, breathing, and chewing, are believed by researchers to be controlled by brain circuits called central oscillators, which generate rhythmic firing patterns that regulate these behaviors. Neural oscillations, or brainwaves, are rhythmic or repetitive patterns of neural activity in the central nervous system. Neural tissue can generate oscillatory activity in many ways, driven either by mechanisms within individual neurons or by interactions between neurons. Neuroscientists have now discovered the neuronal identity and mechanism underlying one of these circuits: an oscillator that controls the rhythmic back-and-forth sweeping of tactile whiskers, or whisking, in mice. This is the first time that any such oscillator has been fully characterized in mammals.

Physical design validation and verification of AI/ML models and algorithms, and debugging them at a later stage, is difficult: most AI/ML models are highly customized for specific clients' functional requirements – especially where the machine must learn and perform certain repeatable and replicable behavior – and there is little information about their internal states and signals. It is certainly not like developing a "one-for-all" computing chip, a function-specific ASIC, or the standardized GPUs with a single type of neural-network processor currently used in computers, smartphones and other products.

The oscillator that controls walking is believed to be distributed throughout the spinal cord, making it difficult to precisely identify the neurons and circuits involved. The oscillator that generates rhythmic breathing is located in a part of the brain stem called the pre-Bötzinger complex (a brainstem region that may generate respiratory rhythm in mammals), but the exact identity of the oscillator neurons is not fully understood.

In recent years, computers have been able to tackle advanced cognitive tasks, like language and image recognition or displaying superhuman chess skills, thanks in large part to artificial intelligence (AI) and machine learning (ML). At the same time, the human brain is still unmatched in its ability to perform tasks effectively and energy efficiently.

Finding new ways of performing calculations that resemble the brain's energy-efficient processes has been a major goal of research for decades. "Cognitive tasks, like image and voice recognition, require significant computer power, and mobile applications, in particular, like mobile phones, drones and satellites, require energy-efficient solutions," says Johan Åkerman, professor of applied spintronics at the University of Gothenburg.

Åkerman describes oscillators as oscillating circuits that can perform calculations and that are comparable to human nerve cells. Memristors are programmable resistors that can also perform calculations and that have integrated memory, making them comparable to memory cells. Integrating the two is a major advancement by the researchers. "This is an important breakthrough because we show that it is possible to combine a memory function with a calculating function in the same component. These components work more like the brain's energy-efficient neural networks, allowing them to become important building blocks in future, more brain-like computers."
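As a minimal sketch of the oscillator-network idea Åkerman describes (a Kuramoto phase model, not the actual spintronic device physics; all parameters below are illustrative), two coupled oscillators with slightly different natural frequencies phase-lock once the coupling is strong enough relative to their frequency detuning:

```python
import math

# Two coupled phase oscillators (Kuramoto model) as an illustrative
# analogue of oscillator-based computing; the real spin-Hall
# nano-oscillator physics is far richer than this sketch.
def simulate(coupling_k, t_end=10.0, dt=0.001):
    w1, w2 = 10.0, 10.5          # natural frequencies (rad/s), assumed
    p1, p2 = 0.0, 1.0            # initial phases (rad)
    for _ in range(int(t_end / dt)):
        d = p2 - p1
        # Euler integration of dp/dt = w +/- K*sin(phase difference)
        p1 += (w1 + coupling_k * math.sin(d)) * dt
        p2 += (w2 - coupling_k * math.sin(d)) * dt
    # wrap the final phase difference into (-pi, pi]
    return (p2 - p1 + math.pi) % (2 * math.pi) - math.pi

locked = simulate(2.0)     # strong coupling: settles near asin(0.125) rad
drifting = simulate(0.1)   # weak coupling: phases never lock
```

With these numbers the lock condition is that twice the coupling exceeds the 0.5 rad/s detuning, so `simulate(2.0)` converges to a small fixed phase offset while `simulate(0.1)` keeps drifting.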

In fact, AI/ML models are rarely "one-for-all": most companies have unique processes, functional rules and settings, and therefore need different techniques and metrics for validation – especially for repeatability and replicability – and for verification (formal algorithmic checks, functional simulation, emulation, density, queues, allocation of capacity and assignment of resources) to confirm that the design derives reasoning and inferences according to the desired decision-making process.

Moreover, the Register Transfer Level (RTL describes circuits at a level similar to the design description on a schematic: flip-flops activated by fully-specified clocks, and combinatorial logic – ranging from simple gates to large multipliers – between the flip-flops) is only a behavioral model of the actual functionality of the processor, so during this step a set of libraries of available logic gates is used to map the RTL to actual electronic components such as transistors, resistors and capacitors. The chip is then manufactured and sent back to the lab for post-silicon validation, where a team of engineers runs different test vectors on it. Any real issue found at this point by modelers and data scientists has to be sent back to the RTL team for fixing, and then to the design verification team for subsequent validation; after that, all of the above steps must be performed again. This iterative process increases the development cost of a highly customized chip.
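The cost impact of that loop can be sketched with a toy model (all unit costs below are entirely hypothetical, chosen only to show that each bug escaping to post-silicon repeats the whole flow):

```python
# Illustrative cost model with hypothetical relative unit costs.
RTL_FIX = 1          # assumed cost of an RTL change
VERIFICATION = 2     # assumed cost of re-running design verification
SYNTHESIS = 1        # assumed cost of re-mapping RTL to gates
FABRICATION = 8      # assumed cost of a new fabrication run (dominates)

def development_cost(post_silicon_bugs):
    """Initial pass plus one full respin per bug found after fabrication."""
    per_pass = RTL_FIX + VERIFICATION + SYNTHESIS + FABRICATION
    return per_pass * (1 + post_silicon_bugs)

baseline = development_cost(0)      # 12 units: one clean pass
two_escapes = development_cost(2)   # 36 units: three full passes
```

Even with these made-up numbers, two escaped bugs triple the development cost, which is why catching issues before fabrication matters so much.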

It is argued that, physiologically, two basic coupling principles govern brain as well as body oscillations: (i) amplitude (envelope) modulation between any frequencies m and n, where the phase of the slower frequency m modulates the envelope of the faster frequency n, and (ii) phase coupling between m and n, where the frequency of n is a harmonic multiple of m. An analysis of the center frequencies of the traditional frequency bands and their coupling principles suggests a binary hierarchy of frequencies. This principle forms the foundation of the binary hierarchy brain–body oscillation theory. This is one reason why it is important that the AI/ML model – including its mathematical construct – and the design verification phases take place at the right time in the project flow.
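The binary hierarchy suggested above can be sketched numerically (the base frequency of 2.5 Hz and the band assignment are illustrative assumptions, not measured values): the center frequencies of the traditional bands are generated by repeated doubling, so adjacent bands stand in the exact 2:1 harmonic ratio that principle (ii) favors.

```python
# Illustrative binary hierarchy of band center frequencies,
# assuming a base frequency f0 = 2.5 Hz.
f0 = 2.5
bands = ["delta", "theta", "alpha", "beta", "gamma"]
centers = {name: f0 * 2 ** k for k, name in enumerate(bands)}
# centers: delta 2.5, theta 5.0, alpha 10.0, beta 20.0, gamma 40.0 (Hz)

# Each adjacent pair is in an exact 2:1 harmonic ratio, the condition
# described above as optimal for m:n phase coupling.
ratios = [centers[bands[k + 1]] / centers[bands[k]]
          for k in range(len(bands) - 1)]
```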

The questions are: How many oscillations are there, what are their frequencies, what is their functional (cognitive and physiological) meaning, what is the frequency architecture (if there is any) and – last but not least – how are body oscillations related to brain oscillations? The line of argumentation will be briefly outlined here.

The first argument is that brain oscillations exhibit ‘preferred frequencies’. Different neurons have different preferred frequencies, and different frequencies dominate in different neural regions.

The second argument – which is closely related to the first – deals with the functional role of phase and the frequency specificity of oscillations. A recorded signal is the superposition of many signals stemming from different sources. When the phase of a task-relevant oscillation is investigated, this superposition with other oscillations of different frequencies and from different sources causes serious problems: the phase of the task-relevant oscillation becomes distorted by the other frequencies in the compound broadband signal.
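A minimal sketch of this distortion (the frequencies, window length and interferer amplitude are arbitrary choices): estimating the phase of a 10 Hz oscillation from a short window of a compound signal that also contains a nearby 12 Hz component.

```python
import math, cmath

fs = 1000            # sampling rate (Hz), assumed
window = 0.2         # 200 ms window: too short to resolve 10 vs 12 Hz
n = int(fs * window)

def estimated_phase(interferer_amp):
    # project the compound signal onto a 10 Hz complex exponential;
    # the true phase of the 10 Hz component is 0 by construction
    acc = 0j
    for k in range(n):
        t = k / fs
        x = (math.cos(2 * math.pi * 10 * t)
             + interferer_amp * math.cos(2 * math.pi * 12 * t))
        acc += x * cmath.exp(-2j * math.pi * 10 * t)
    return cmath.phase(acc)

error_alone = abs(estimated_phase(0.0))   # near zero without interference
error_mixed = abs(estimated_phase(1.0))   # substantial phase distortion
```

The 10 Hz phase is recovered almost exactly when the oscillation is alone, but the superposed 12 Hz component leaks into the short-window estimate and shifts the measured phase by a large fraction of a radian.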

The third argument refers to the numerical relationship between the frequencies of oscillations. It is an obvious fact that m : n phase to phase coupling (between a slow frequency m and a fast frequency n) is optimal for harmonic (= integer) frequency ratios only.
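This can be illustrated numerically (the parameters are arbitrary): a 2:1 phase-to-phase locking index is maximal for an exact harmonic frequency ratio and collapses for a golden-mean ratio.

```python
import math, cmath

fs, duration = 1000, 2.0        # sampling rate and window, assumed
samples = int(fs * duration)
g = (1 + math.sqrt(5)) / 2      # golden mean

def locking_index(f_slow, f_fast, n=2, m=1):
    # |mean(exp(i*(n*phi_slow - m*phi_fast)))| for noiseless sinusoids
    acc = 0j
    for k in range(samples):
        t = k / fs
        phi_slow = 2 * math.pi * f_slow * t
        phi_fast = 2 * math.pi * f_fast * t
        acc += cmath.exp(1j * (n * phi_slow - m * phi_fast))
    return abs(acc) / samples

harmonic = locking_index(10.0, 20.0)      # exact 2:1 ratio: index is 1
golden = locking_index(10.0, 10.0 * g)    # irrational ratio: index near 0
```

For the integer ratio the generalized phase difference is constant, so the index is exactly 1; for the golden-mean ratio it drifts continuously and averages out.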

Optical (Photonic) Processing: High Throughput, Low Energy

This technology enables a uniform, multi-purpose Endpoint AI design of photonic chips for a range of different applications and performance requirements, as each chip can be programmed for a specific set of AI/ML models and algorithms after fabrication. Optically programmed processors deliver increased machine learning (ML) model and algorithm performance – a leap of 10⁵× to 10⁷× in AI/ML performance – and help run high-performance computing applications in an endpoint device, without a constant connection to the Edge and Cloud, at substantially lower energy cost. In addition, the cost per chip can be dramatically reduced by the increase in production volume, and rapid prototyping of new photonic circuits is enabled. As essential building blocks for programmable circuits, erasable directional couplers (DCs) are designed and fabricated utilising ion-implanted waveguides.

The hypothesis of distinct frequency domains assumes that different and distinct oscillations emerge in a state- and task-dependent manner. In some cases they can be detected as clear peaks in the power spectrum; in other cases, only by their event-related reactivity.

Prominent examples of spectral peaks are alpha, emerging for example during rest with eyes closed, but also under different task demands such as attention and memory. Other examples are frontal midline theta (emerging e.g. during increased and ongoing attentional demands), sleep spindles (emerging after sleep onset), and slow oscillations. In summary, preferred frequencies with state- and/or task-specific reactivity are well documented. The hypothesis is that cognitive processing domains are associated with frequency domains represented by the center frequencies of traditional frequency bands.

So-called fiber-to-the-processor schemes are not new, and there are many lessons from past attempts about cost, reliability, power efficiency, and bandwidth density. All AI/ML accelerators aim to reduce the energy needed to process and move data during a specific linear algebra step in neural networks: matrix multiplication. In optical (photonic) accelerators, pulsed lasers encoded with information about each neuron in a layer flow into waveguides and through beam splitters. The resulting optical signals are fed into a grid of Mach-Zehnder interferometers (MZIs), which are programmed to perform matrix multiplication.
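A sketch of the 2×2 building block of such a mesh, assuming one common textbook decomposition of an MZI into two 50:50 beam splitters and two phase shifters (real photonic meshes may use different conventions and parameterizations):

```python
import cmath, math

def mat2(a, b, c, d):
    return [[a, b], [c, d]]

def matmul(A, B):
    # 2x2 complex matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mzi(theta, phi):
    # beam splitter -> internal phase -> beam splitter -> external phase
    bs = mat2(1 / math.sqrt(2), 1j / math.sqrt(2),
              1j / math.sqrt(2), 1 / math.sqrt(2))   # 50:50 beam splitter
    p_theta = mat2(cmath.exp(1j * theta), 0, 0, 1)   # internal phase shift
    p_phi = mat2(cmath.exp(1j * phi), 0, 0, 1)       # external phase shift
    return matmul(bs, matmul(p_theta, matmul(bs, p_phi)))

def apply(U, x):
    # the programmed matrix acting on a 2-element input vector
    return [U[0][0] * x[0] + U[0][1] * x[1],
            U[1][0] * x[0] + U[1][1] * x[1]]

U = mzi(0.7, 0.3)            # phase settings chosen arbitrarily
y = apply(U, [1.0, 0.0])
# the MZI is lossless (unitary): output power equals input power
power = abs(y[0]) ** 2 + abs(y[1]) ** 2
```

Programming the phase pair (theta, phi) selects which 2×2 unitary the device applies; a triangular or rectangular mesh of such blocks composes these into larger matrix multiplications.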

However, there are four different principles of cross-frequency m : n coupling: (i) power to power (the amplitude envelopes of m and n are correlated), (ii) phase to phase (phase coupling between m and n; also termed cross-frequency phase synchronization), (iii) phase to frequency (the phase of m is associated with a change in the frequency of n), and (iv) phase to amplitude envelope coupling (the phase of m is associated with an increase or decrease in the amplitude envelope of n).
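Principle (iv), phase-to-amplitude-envelope coupling, can be sketched as follows (the frequencies, modulation depth, and the crude rectify-and-average envelope estimator are all illustrative choices):

```python
import math

# A slow 5 Hz rhythm whose phase modulates the envelope of a fast
# 40 Hz rhythm; the envelope is then recovered with a rectify-and-
# average estimator spanning one fast cycle.
fs = 1000
t = [k / fs for k in range(fs)]                # 1 second of signal
slow_phase = [2 * math.pi * 5 * x for x in t]
signal = [(1 + 0.5 * math.cos(p)) * math.cos(2 * math.pi * 40 * x)
          for p, x in zip(slow_phase, t)]

win = fs // 40                                  # 25 samples = one 40 Hz cycle
envelope = [sum(abs(v) for v in signal[k:k + win]) / win
            for k in range(len(signal) - win)]
# slow-rhythm phase, aligned to the center of each envelope window
modulator = [math.cos(slow_phase[k + win // 2]) for k in range(len(envelope))]

def corr(a, b):
    # Pearson correlation coefficient
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

coupling = corr(modulator, envelope)   # strong coupling: value near 1
```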

The frequency architecture is not only defined by conditions enabling optimal phase coupling, but also by conditions enabling optimal phase decoupling, which reduces interference between frequencies. Two aspects are important here. One refers to mathematical analyses which document that the golden mean (g = 1.618…), as the 'most irrational' number, enables the best possible frequency separation between two frequencies m and n.

Edge and cloud are ideal places to utilize machine learning tools to train simpler objects to perform actions that were previously fulfilled by more powerful devices. Endpoint AI will now allow smaller objects to perform a greater variety of functions with lower latency – from IoT devices, tablets, smart wearables, smartwatches, fitness trackers and healthcare monitoring devices to wind turbines and automobiles. It will save resources such as transmission and energy costs, with more functions happening on smaller endpoints, and will have a massive environmental impact, especially considering that there are already more than 20 billion connected Internet devices and over 250 billion microcontrollers today. Machine learning in endpoint devices enables a new world of Endpoint AI. New use cases are emerging that bypass the need for huge amounts of data to be transmitted, alleviating bottlenecks and latency and creating new opportunities in a variety of operating environments. Endpoint AI opens a world of new opportunities and use cases, many of which have yet to be imagined.
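The 'most irrational number' property of the golden mean mentioned above can be illustrated by its continued-fraction expansion, which consists entirely of ones and is therefore the slowest-converging expansion possible:

```python
import math

def continued_fraction(x, terms):
    # compute the first `terms` coefficients of x's continued fraction
    out = []
    for _ in range(terms):
        a = int(x)
        out.append(a)
        x = 1.0 / (x - a)
    return out

g = (1 + math.sqrt(5)) / 2              # golden mean, 1.618...
expansion = continued_fraction(g, 10)   # first ten terms are all 1
```

Because every coefficient is 1, rational approximations to g converge as slowly as possible, which is the precise sense in which two frequencies in a golden-mean ratio stay maximally decoupled.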