Real-time OS and Linux in Embedded Systems
When configuring an embedded system, the choice of OS is one of the most important decisions in the early stages of development.
The decision depends largely on whether your main concern is timing constraints or the software to be used, such as middleware. Generally speaking, a real-time OS is better suited to time-critical systems, while Linux is the usual recommendation when you need networking, file systems, or advanced graphical displays. Of course, many factors such as cost and development time make it hard to decide how far to compromise on requirements, and there is no single right answer.
However, once you consider the mix of paid and free software available, the configuration above does not always hold, and hybrid setups are increasingly common. On the real-time OS side, improvements in CPUs and their surrounding peripherals mean more and more can be done on a single chip. On the Linux side, you need storage devices (flash ROM, eMMC, etc.) to match its larger memory and storage requirements, but ideas such as future expandability and common platforms are now widespread, so Linux can still offer a cost advantage.
Board size and similar factors are also considerations, but these vary greatly with the shape, design, and specifications of the product.
Guidelines for choosing between a real-time OS and Linux
Example scenario: drawing graphs while measuring with high accuracy
Now let's consider a simple guideline for deciding whether to use a real-time OS or Linux when configuring an embedded system, using the following example requirements.
(1) Processing speed
Acquire 1000 samples of measurement data within 1 ms.
(2) Device Drivers
Average the measured data in blocks of 100 samples and store the results in storage.
(3) Drawing process
Display graphs from the data stored in storage.
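Requirement (2) above, averaging blocks of 100 samples before storing them, can be sketched in a few lines of C. This is a generic sketch; the sample type and function name are assumptions, not tied to any particular device:

```c
#include <stddef.h>
#include <stdint.h>

/* Average one block of n raw samples (n = 100 in requirement (2)).
 * A 32-bit accumulator is safe here: 100 x 16-bit values cannot overflow it. */
int32_t block_average(const int16_t *samples, size_t n)
{
    int32_t sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += samples[i];
    return sum / (int32_t)n;   /* truncating integer average */
}
```

The result of each call would then be written to storage; that part is omitted because it depends entirely on the file system or flash driver in use.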
Looking at these requirements, you might think a fast microcontroller would be best, wouldn't you? That impression probably comes from the phrase "processing within a certain time." Let's look at how each requirement would be approached on a real-time OS and on a system such as Linux.
(1) Processing speed
First of all, meeting this condition leaves 1 ms / 1000 = 1 µs per sample. In other words, a system whose per-sample processing takes more than 1 µs will quickly fall behind. At 10 MHz one clock cycle is 100 ns, so 1 µs gives only about ten cycles per sample, which is still within reach of a microcontroller. A Cortex-M0, for example, commonly runs at around 40 MHz, i.e. 25 ns per cycle, so it can manage this kind of workload.
However, it is not realistic to run Linux at 40 MHz. It is hard to imagine Linux running well on anything less than a CPU of roughly 400 MHz; an environment of several hundred MHz is effectively the minimum.
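The budget arithmetic can be checked directly. This is just the numbers from requirement (1) and the clock rates above, written out as code:

```c
#include <stdint.h>

/* Time budget per sample in nanoseconds: e.g. a 1 ms window / 1000 samples. */
uint32_t budget_ns(uint32_t window_ns, uint32_t samples)
{
    return window_ns / samples;
}

/* Clock cycles available within that budget at a given CPU frequency (Hz). */
uint32_t cycles_per_sample(uint32_t ns, uint32_t hz)
{
    /* 64-bit intermediate avoids overflow at realistic clock rates */
    return (uint32_t)(((uint64_t)ns * hz) / 1000000000u);
}
```

At 10 MHz the 1000 ns budget allows about 10 cycles per sample; at 40 MHz, about 40; at 400 MHz, about 400.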
This is only a minimum estimate, and in practice other conditions will overlap, so I cannot claim it is definitive. But if I were to venture a guideline, it might look like this.
- If you are going to sample data and do the math on a microcontroller, use a Cortex-M3/M4 running at 40 MHz or higher (there are plenty of them).
- For Linux, a Cortex-A series CPU at 400 MHz or higher is a prerequisite.
- With a dual-core or heterogeneous-core device, you can combine both approaches.
[Supplement] If there still seem to be many candidates, the next selection criteria are naturally power consumption and the number of peripherals, right? Choose carefully.
(2) Device Drivers
The next thing to focus on is device drivers. These days, CMSIS drivers are available almost as standard, especially for Arm microcontrollers; every device vendor knows that peripheral drivers are important. It has become very convenient. I used to write my own, but I rarely do now.
Now, why are device drivers so important? Without going into too much detail, the key is making good use of the DMA in the SoC to improve system performance: if you can use DMA, you can achieve high throughput while reducing the load on the CPU. Even with the tight per-sample deadline mentioned earlier, if DMA transfers the sampled data to the designated memory, the CPU can afford to run at a lower clock.
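The DMA offloading described above is often structured as a double-buffer ("ping-pong") pattern: DMA fills one half of a circular buffer while the CPU averages the other half. The function names here (`dma_half_complete_isr` and so on) are illustrative placeholders, not any vendor's API; on real hardware they would be hooked to the DMA controller's half-transfer and transfer-complete interrupts.

```c
#include <stddef.h>
#include <stdint.h>

#define BLOCK 100                       /* samples per averaging block */

volatile int16_t dma_buf[2 * BLOCK];    /* DMA writes here circularly */
volatile int32_t latest_avg;            /* latest result for the application */

/* Average one half of the buffer; kept short because it runs in an ISR. */
static int32_t average_block(const volatile int16_t *p)
{
    int32_t sum = 0;
    for (size_t i = 0; i < BLOCK; i++)
        sum += p[i];
    return sum / BLOCK;
}

/* Hypothetical hook: first half is full, DMA is now filling the second half. */
void dma_half_complete_isr(void) { latest_avg = average_block(dma_buf); }

/* Hypothetical hook: second half is full, DMA wrapped back to the first half. */
void dma_full_complete_isr(void) { latest_avg = average_block(dma_buf + BLOCK); }
```

Because the CPU only touches the half the DMA has finished with, each interrupt handler has a full block period to do its averaging, which is exactly why the clock can be slower.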
What if there is no Linux driver? That depends on whether the device vendor provides one. In recent years it is rare for them not to, but whether the driver supports every system is another matter. The memory map in particular may change with the board configuration, and you need to understand it to get the most out of DMA. Although it may be a rare case, you may find yourself having to tweak the device driver.
Also, most peripherals support DMA, so there may be little need to write support yourself, but whether it is fully implemented is another matter. FIFO control, which determines interrupt timing, must also be specified by the application, so it is not as simple as having a driver do everything. Still, it is much easier than creating a driver from scratch; rather than just calling the API, read the source code carefully before using the provided driver.
In the case of a real-time OS, some drivers may be provided on the OS side, but drivers for special-purpose peripherals generally are not. Real-time OSs provide only the minimum drivers needed for the kernel to run; the rest are implemented by the user. In exchange for the freedom to build the OS as you like, you have to do that work yourself. You will typically start from the drivers provided by the device vendor, but since those do not take integration with the real-time OS into account, you should expect to modify them.
Depending on the peripheral, a device driver may be a character device or a block device, so when designing the system it is worth budgeting time to read the driver source and check that it fits your system.
(3) Drawing process
There are many ways to draw, but the most important resource for drawing is memory. Not flash ROM, but the SRAM or DRAM (DDR) used as the working area. Since the display determines the product's appearance, how quickly it can be drawn matters, and fast drawing adds value to the product. For this reason, fast rendering usually requires at least two buffers of roughly the same size as the displayed screen, so a certain amount of memory is needed.
For example, a monochrome VGA display is 640×480, so one buffer is 640×480 / 8 = 38,400 bytes, i.e. 38.4 KB; two buffers need 76.8 KB. For a color display with 8 bits (256 levels) each of R, G, and B, it is 640×480×3 = 921,600 bytes, about 921.6 KB, so the screen buffer alone approaches 1 MB. Microcontrollers with large SRAM are increasing recently, but compared with a processor plus external SPI flash and DDR, the BOM price of a single-chip microcontroller with large SRAM does not differ much.
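The buffer sizes all follow from one formula, width × height × bits per pixel / 8, which is easy to parameterize:

```c
#include <stdint.h>

/* Bytes needed for one frame buffer: width x height x bits-per-pixel / 8. */
uint32_t framebuffer_bytes(uint32_t w, uint32_t h, uint32_t bpp)
{
    return (w * h * bpp) / 8u;
}
```

For VGA this gives 38,400 bytes at 1 bpp (monochrome) and 921,600 bytes at 24 bpp; double buffering doubles each figure.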
If the hardware difference disappears, a Cortex-A based SoC is the better choice. With high-speed memory such as DDR, you can drive a larger display resolution (reference: memory size by resolution).
Seen this way, you can see that display alone consumes a lot of memory. On the system, these resources are treated as "frame buffers" and reserved as such.
Now, the problem with choosing a Cortex-A system is that there is little variety in real-time OSs that support it. There are certainly supported solutions, but the majority are paid products, and examples of free real-time OSs on Cortex-A are largely limited to a few TRON kernels.
If you choose a Cortex-M microcontroller, a real-time OS is an option, but there are concerns that the rendering process and libraries may be insufficient. Some microcontrollers have a drawing engine. If you only need to put something on the display, you do not even need the memory estimated above: you can simply keep feeding data to the LCD panel controller and it will display. For smooth display and fast drawing, however, it is important to hold a buffer of the same size as the display and stream the data continuously by DMA, so this also depends on the performance of the device driver. Microcontrollers with a built-in LCD controller are increasing recently, so if the drawing is a simple library of points and lines, it is not so difficult to implement.
We have been talking mostly about hardware, so a word about software. When displaying a graph, the key is to draw the fixed axes once and update only the displayed data at regular intervals. The finer these intervals, the smoother the drawing; if they are sluggish, the display is disappointing. This balance is difficult to get right, but it is much less work than redrawing the entire image.
By specifying the region to be drawn and periodically rewriting only the updated data in that region, you can implement a graph display. The general approach is to narrow down the memory area to be updated and rewrite only the data to be drawn. For a graph, you move the plot position at equal intervals across the display area, or connect the points with lines. A hardware layer function makes this kind of drawing easier to design.
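As a sketch of the partial-redraw idea (the tiny frame buffer and 8-bit pixel format here are arbitrary assumptions for illustration): only the strip where the plot moves is cleared and redrawn, and the axes outside that region are never touched.

```c
#include <stdint.h>
#include <string.h>

#define FB_W 64
#define FB_H 32

uint8_t fb[FB_W * FB_H];               /* tiny 8-bit-per-pixel frame buffer */

/* Clear only the rectangle that holds the moving data, not the whole screen. */
void clear_region(int x, int y, int w, int h)
{
    for (int row = 0; row < h; row++)
        memset(&fb[(size_t)(y + row) * FB_W + x], 0, (size_t)w);
}

/* Plot one sample; the fixed axes elsewhere in fb are left untouched. */
void plot_sample(int x, int y, uint8_t color)
{
    if (x >= 0 && x < FB_W && y >= 0 && y < FB_H)
        fb[(size_t)y * FB_W + x] = color;
}
```

Each update cycle then becomes: clear the plot region, draw the new samples, and let DMA (or the LCD controller) pick up the changed area.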
What should I choose in the end, a real-time OS or Linux?
I have been writing at length, and by now even I am not sure which one to choose (laughs)... To sum up briefly in APS terms, how do these guidelines sound?
●If you use a real-time OS on a microcontroller, is the clock fast enough, and how much additional external memory do you need?
●If you use Linux on a processor, can the parts that need real-time processing be handled by the driver's DMA?
●Is it worth the cost in time and money to learn the device and software?