
How to meet the challenges of deploying AI-powered workflows in space


By Thomas Guillemain and Thomas Porchez, Teledyne e2v

Of the more than 6,500 operational satellites currently orbiting the planet, at least 1,000 perform some form of Earth observation (EO) work. The images obtained from these activities can serve a wide variety of purposes, many of which have ecological or sociopolitical benefits. Thanks to advances in imaging technology, the level of accuracy that can be achieved is continually improving. Consequently, the range of applications that can be addressed is broadening and the quality of the results is increasing. Unfortunately, this progress creates problems elsewhere: as image resolution and data volumes increase, communication bottlenecks begin to develop. In some cases it may be necessary to process data from hundreds of satellites, which is again problematic, as there will be too much material to examine unless it is sorted beforehand and the excess filtered out.

In the past it was possible to transmit relatively small amounts of data directly to Earth and then process it in specialized data centers. The migration from those data center infrastructures to cloud-based platforms, along with the evolution of the sensor technology used (with more sophisticated, higher-resolution devices being specified), means that downlinks are simply no longer up to the task, as they cannot scale with the increasing demand for data. Consequently, a totally new approach is needed.

Moving processing closer to the source, much as edge computing systems are beginning to be implemented in terrestrial communication networks, will bring several important improvements. First, it will no longer be necessary to classify all images on the ground, since only valuable images will be transmitted. Second, it will help overcome the severe bandwidth limitations that are already becoming apparent (as just pointed out). Third, the response to an unfolding situation will be faster. This could be important when dealing with natural or man-made disasters, as they can be identified sooner, so that emergency services and aid organizations can be informed more quickly and more lives can be saved.

On-board processing

For the reasons just outlined, there is tremendous interest in moving from a centralized architecture to one based on edge computing for EO work. Instead of sending everything back, placing more processing capability on the satellite itself means the data obtained can be interpreted in orbit and its relevance decided there. This places a much lower load on the satellite downlink, and also means no power is wasted transmitting data that would not be useful.

If the images obtained are determined to contain elements of interest that deserve further analysis, or show signs of something that must be reacted to urgently, their transmission is clearly justified. Conversely, if the material proves inconsequential, the need to send it disappears and no bandwidth is wasted.
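As a rough illustration of this triage principle, the Python sketch below shows how an on-board decision step might work. It is only a minimal, hypothetical example: the score_scene stub stands in for a real trained AI model, and the threshold value is arbitrary.

```python
import numpy as np

# Hypothetical threshold: scenes scoring below this are discarded on board.
INTEREST_THRESHOLD = 0.3

def score_scene(image: np.ndarray) -> float:
    """Stub for an on-board AI model rating how 'interesting' a scene is.

    A real system would run a trained detector or classifier; here, image
    contrast (standard deviation) is used as a crude placeholder so the
    sketch stays self-contained.
    """
    return float(np.clip(image.std() / 128.0, 0.0, 1.0))

def should_downlink(image: np.ndarray) -> bool:
    """Decide on board whether a captured image is worth transmitting."""
    return score_scene(image) >= INTEREST_THRESHOLD

# Example: a featureless tile is dropped, a high-contrast tile is queued.
flat = np.full((256, 256), 100, dtype=np.uint8)
busy = np.random.default_rng(0).integers(0, 256, (256, 256), dtype=np.uint8)
print(should_downlink(flat), should_downlink(busy))  # False True
```

In a real mission the scoring model would be the AI inference step running on the satellite's processor, but the overall flow, score first, transmit only what passes, is the essence of the edge approach described here.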

Essential processor features

Any type of semiconductor technology intended for use in space requires attributes that go far beyond what is expected in conventional application scenarios. Once the hardware is in space it cannot be repaired, so any damage or malfunction would jeopardize the mission. Components must withstand the intense shock and vibration forces they are exposed to during launch, as well as the temperature extremes that occur as the satellite passes from sunlight into shadow on each orbit.

They must also be robust enough to withstand radiation exposure. Ions striking processor devices can cause single-event latch-up (SEL) and single-event upsets (SEU). In addition, the total ionizing dose (TID) must be considered, since it can shorten a device's useful life. To ensure that a given processor will function for a long time once deployed in space, and that malfunctions will not occur, extensive radiation testing is mandatory.

There are also other points that should not be overlooked. The satellites have very little space to house all the necessary electronics. They also have a limited energy budget (based on what their photovoltaic cells can generate). Finally, the "New Space" community generally does not have huge financial reserves. Project costs must be kept under control, so selected devices must be priced appropriately.

Case study

Swiss-based space systems integrator Beyond Gravity is currently developing a high-performance processor platform that will enable real-time data processing on low Earth orbit (LEO) observing satellites. The Lynx platform must have superior computing power while not demanding too much from the available energy budget. It also has to be robust enough to withstand long-term operation in space.

Based on the different aspects outlined above, the company needed a radiation-resistant processing solution on which sophisticated AI algorithms could run. This had to be achieved with minimal power, without taking up too much board space, and without too high a price tag.

Consultations with Teledyne e2v personnel proved fruitful, leading to the choice of one of the company's processor solutions. Using commercial off-the-shelf (COTS) processing technology and then applying extensive testing to select the best-performing units, Teledyne e2v is able to offer processors that are more cost-effective than custom-built solutions.

Figure 1: Teledyne e2v LS1046-Space radiation-resistant multicore processor

Designed to withstand the challenging application environment that space represents, yet capable of running at speeds of up to 1.8 GHz, the Teledyne e2v LS1046-Space processor is becoming the gold standard for on-board processing in satellites. It is the most powerful space-qualified processor on the market today, outperforming competing solutions by more than an order of magnitude.

Thanks to its multicore architecture, made up of four 64-bit Arm® Cortex®-A72 cores, it can deliver 30k DMIPS of processing performance. Other features include a highly efficient DDR4 memory controller with built-in 8-bit error-correcting code (ECC) to mitigate the threat of data corruption, as well as a 2 MB L2 cache shared by all the processing cores.

This processor is supplied in a 23 x 23 mm, 780-ball BGA package, so it takes up minimal board space. To allow integration into a wide variety of system designs, it also includes a broad range of interfaces, including 10 Gbit Ethernet, PCIe Gen 3.0, SPI and I2C. The memory interface has a 72-bit bus width, with 64 bits dedicated to data and the remaining 8 bits allocated to ECC. In addition to its processing capabilities, the LS1046-Space processor offers exceptional robustness, with NASA Level 1 and ECSS Class 1 ratings and an operating temperature range of -55 °C to +125 °C.
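As a loose illustration of how such extra check bits protect data, the toy Python sketch below uses the classic Hamming(7,4) scheme, a miniature analogue of the 8 ECC bits guarding each 64-bit word. It is purely illustrative and bears no relation to the actual memory controller logic.

```python
def hamming74_encode(d):
    """Encode 4 data bits (d1..d4) into a 7-bit codeword with 3 parity bits."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recompute parity; a non-zero syndrome is the 1-based position of a
    single flipped bit, which is then corrected in place."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1   # undo the upset
    return c, syndrome

word = hamming74_encode([1, 0, 1, 1])
hit = list(word)
hit[4] ^= 1                    # simulate a single-event upset in one bit
fixed, pos = hamming74_correct(hit)
print(fixed == word, pos)      # True 5
```

The production ECC works on much wider words and in hardware, but the principle is the same: redundancy added at write time lets a bit flipped by radiation be located and silently repaired at read time.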

The processor is paired with Teledyne e2v DDR4T04G72 memory, a radiation-resistant 4 GB DDR4 memory that uses a multi-chip package (MCP) configuration to significantly raise density levels. Both the memory and processor devices have passed 100 krad TID testing, supporting a long operational lifespan. They have also demonstrated a tolerance of 60 MeV·cm²/mg with respect to single-event latch-up (SEL) and single-event upsets (SEU), so their functional integrity is maintained.

Software-wise

To complement this radiation-resistant, space-grade hardware, Teledyne partner Klepsydra developed embedded software for the Beyond Gravity Lynx product. This software has been highly optimized for resource-constrained applications. Thanks to its proprietary parallelization technology, it can handle complex AI workflows with minimal power consumption while avoiding data loss that could otherwise lead to mission failure. Running the Klepsydra software package on the LS1046-Space processor allows a 50% reduction in CPU load, combined with a threefold increase in overall throughput and a dramatic reduction in latency. The software's performance was measured using AI to identify points of interest within captured images and to detect cloud cover. The latter algorithm is a particularly important research area, as it can determine whether cloud cover is too high for the captured images to be worth transmitting.
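As a very rough sketch of what such a screening step does, the Python fragment below estimates cloud cover with a simple brightness threshold and skips the downlink when the scene is mostly cloud. The threshold values and function names are hypothetical; real on-board cloud detection, such as the AI model mentioned above, is far more sophisticated.

```python
import numpy as np

# Hypothetical settings: 8-bit pixels at or above this value count as cloud,
# and scenes with more than 70% cloud cover are not worth transmitting.
CLOUD_BRIGHTNESS = 200
MAX_CLOUD_FRACTION = 0.7

def cloud_fraction(band: np.ndarray) -> float:
    """Estimate cloud cover in a single grayscale band by thresholding."""
    return float((band >= CLOUD_BRIGHTNESS).mean())

def worth_downlinking(band: np.ndarray) -> bool:
    """Skip transmission when the scene is mostly obscured by cloud."""
    return cloud_fraction(band) <= MAX_CLOUD_FRACTION

scene = np.random.default_rng(1).integers(0, 256, (512, 512), dtype=np.uint8)
print(f"cloud cover: {cloud_fraction(scene):.0%}, downlink: {worth_downlinking(scene)}")
```

Even this crude filter shows why the technique matters: every heavily overcast scene rejected on board is downlink bandwidth and transmit power saved for imagery that can actually be used.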

Among the numerous Earth observation applications that can be addressed with this technology are monitoring deforestation and urbanization, smart agriculture, cloud detection, recording glacial movements, studying floods and forest fires, and monitoring military activities. It could also be used to provide early warning of life-threatening events, such as tsunamis.

Conclusion

The application of edge computing principles to equipment deployed in space will mitigate the problems related to downlink bandwidth constraints, since only data of true value needs to be transmitted. Harnessing the power of AI to carry out processing at the source will lead to a much more efficient workflow and enable better-informed decision making.

Although reliability has always taken precedence over performance in space-based processors, today both are needed at the same time. Engineering innovations such as those described above are bringing the processing capabilities of ground-based systems to space applications. Thanks to the collaboration between Teledyne e2v, Klepsydra and Beyond Gravity, it will be possible to develop a new generation of satellites and spacecraft. These will have the processing power needed to run complex AI algorithms, which will translate into the higher levels of autonomy required for decision-making on the image data being captured, so that operations can be performed more efficiently without excessively straining bandwidth capacity or power reserves.