Technical contribution

Availability under all circumstances

The cloud and the Internet of Things are currently trending topics. However, neither will work without an intermediate tier between the virtually inexhaustible computing power of the cloud and the limited microcomputers in IoT end devices. This role falls to edge computing: mini data centers at the point of action. To fulfill this task reliably, edge computing requires extensive measures for high availability, including robust and compact uninterruptible power supplies.

In IT, it is the particularly small and the particularly large structures that currently stand out: cloud computing and the Internet of Things (IoT). The cloud is now used by every medium-sized and large company; even privately, most people use a cloud service such as Dropbox, Office 365 or Google Docs, even if they do not perceive it as a cloud. A recent study by IT management provider SolarWinds revealed that 86% of German IT experts see cloud computing and hybrid IT as one of the five most important technologies in their company's IT strategy. While the cloud is already past the hype and has become part of everyday life, IoT is still considered a technology of the future. But there is no question that both technologies, cloud and IoT, will lead to more and more intelligent, networked end devices in both personal and professional environments.

Generate locally, process locally

Another discussion has gained momentum in recent months: what is the best way to process locally generated data? The cloud offers almost unlimited computing and storage capacity, but is only reachable via latency-prone long-distance connections. The microcomputers in IoT devices operate close to the action, but have limited CPU capacity and usually no mass storage. One likely consequence is edge concentrators: small data centers located somewhere between the cloud and the IoT end devices. In his latest report, analyst Thomas J. Bittman of Maverick Research estimates that the interplay between edge computing and the cloud will shift the focus of data production and processing away from central data centers to the "edge" within the next four to five years.

Future applications in the automotive environment are a striking example. Large amounts of data are generated both in the vehicles themselves and in the surrounding infrastructure. Everything the car itself needs to function is processed within the vehicle. Other information that is also of interest to other road users must be shared with the surroundings. For example, it makes sense for all vehicles on the same stretch of highway to know that a car has triggered emergency braking and has now come to a standstill. With this information, rear-end collisions could be avoided or at least significantly reduced. Sending the information to the cloud first and then distributing it to all relevant vehicles would be possible, but cumbersome and error-prone. What if the network connection is currently unavailable or overloaded? Or if the responsible server is under heavy load and its processing times are longer than usual? For the vehicles in the immediate vicinity of the braking car, it is a matter of seconds or fractions of a second. In such a case it clearly makes more sense to avoid the round trip to the cloud and back, and this is where edge computing comes into play.
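A rough back-of-the-envelope calculation illustrates what these fractions of a second mean on the road. The Python sketch below is purely illustrative; the latency figures are assumptions for the sake of the example, not measurements from any real system.

```python
# Back-of-the-envelope comparison: how far a following vehicle travels
# before an emergency-braking alert arrives. All latency figures are
# illustrative assumptions, not measured values.

SPEED_KMH = 130.0                 # following vehicle on the highway
speed_ms = SPEED_KMH / 3.6        # ~36.1 m/s

# Assumed round-trip alert latencies (hypothetical values)
LATENCY_CLOUD_S = 0.250   # device -> network -> cloud -> network -> device
LATENCY_EDGE_S = 0.020    # device -> roadside edge node -> device

for label, latency in [("cloud", LATENCY_CLOUD_S), ("edge", LATENCY_EDGE_S)]:
    distance = speed_ms * latency
    print(f"{label:5s}: alert after {latency * 1000:5.0f} ms "
          f"-> {distance:4.1f} m travelled blind")

# cloud: alert after   250 ms ->  9.0 m travelled blind
# edge : alert after    20 ms ->  0.7 m travelled blind
```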

A data center on every corner

Thomas Bittman's main thesis is that growth in the cloud is slowing down and that computing power and storage capacity are shifting to the edge of the network. Computers and data are moving closer to the users. These small edge data centers will appear everywhere: along roads, at traffic junctions, in public places, or in larger industrial plants where a great deal of sensor data with extremely stringent latency requirements has to be processed. In abstract terms, edge computing can be imagined as an additional virtualized layer between the data producer and the cloud: a local decision-making and processing layer that relieves downstream instances. This layer prepares data according to predefined rules, thereby improving response times, reducing the bandwidth required for the cloud connection and reducing the storage capacity needed in the data center. This enables applications that cannot be implemented with the cloud alone due to insufficient bandwidth, or that generate data that is only relevant on site, in a local context, and does not need to be processed centrally. The more IoT devices populate factory halls, streets and buildings in the future, the more important edge computing will become for an efficient and cost-effective Internet of Things.
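What such a rule-based preprocessing layer does can be sketched in a few lines. The following Python fragment is a minimal illustration under assumed names and thresholds, not a real edge framework: raw readings stay local, and only a compact summary plus rule violations are forwarded to the cloud.

```python
from statistics import mean

# Minimal sketch of a rule-based edge preprocessing layer.
# Raw readings stay local; only a compact summary and any rule
# violations are forwarded upstream. The threshold is hypothetical.

TEMP_LIMIT_C = 85.0   # assumed alarm threshold

def process_window(readings_c: list[float]) -> dict:
    """Condense one window of raw sensor readings into the small
    payload that is actually sent to the cloud."""
    violations = [r for r in readings_c if r > TEMP_LIMIT_C]
    return {
        "count": len(readings_c),         # thousands of raw values ...
        "mean_c": round(mean(readings_c), 2),
        "max_c": max(readings_c),
        "violations": violations,         # ... shrink to a few fields
    }

# 1,000 raw readings become one compact summary for the uplink:
window = [72.0 + (i % 7) * 0.5 for i in range(1000)]
print(process_window(window))
```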

Availability above all

Data centers for edge computing must offer extremely high availability, as they will also supply computing power to safety-relevant applications, such as the networked vehicles in the example above. They are also usually located at remote sites that are only monitored remotely, meaning that faults can often be rectified only after a relatively long waiting time. The systems used in them should therefore operate as autonomously as possible, and the environmental conditions must be extremely reliable. One far-reaching factor is the power supply. The first things to consider are the quality of voltage and frequency. By European and, above all, international standards, Germany has a very stable and clean power supply. Nevertheless, one should not be fooled by the proverbial unlimited availability of electricity from the socket. As energy expert Staffan Reveman points out: "Even if the balance sheet looks good on paper, we must not forget that outages lasting less than three minutes are not included in the statistics. And due to the ongoing upheavals in the energy mix, with a growing share of volatile renewable energy, no one can currently predict how the security of the electricity supply will develop." Grid voltage and frequency already fluctuate constantly. Without measuring instruments this goes unnoticed only because power supply units tolerate fluctuations within certain limits and because uninterruptible power supplies compensate for the differences with their filtering and bridging functions. The clocks that ran slow at the beginning of 2018 showed how quickly even small frequency deviations can make themselves felt.

IT equipment depends all the more on voltage and frequency stability. In practice, high immunity to overvoltages and flexibility in handling the input frequency usually count for more than an extremely long bridging time. Short dips are absorbed by uninterruptible power supplies via the capacitors in the DC link. The wider the fluctuation range that can be tolerated, the more universally the UPS can be deployed. The same applies to frequency regulation. UPS systems developed specifically for industrial environments, such as the products from Wöhrle Stromversorgungssysteme, can cope with input fluctuations between 40 and 70 Hertz and still maintain the desired output frequency.
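The decision logic behind such tolerance windows can be sketched as follows. The 40 to 70 Hertz window is taken from the text above; the voltage window, mode names and example values in this Python sketch are illustrative assumptions, not a vendor specification.

```python
# Sketch of the input-tolerance logic described above. The 40-70 Hz
# frequency window comes from the text; the voltage window and the
# mode names are illustrative assumptions.

FREQ_MIN_HZ, FREQ_MAX_HZ = 40.0, 70.0
VOLT_MIN_V, VOLT_MAX_V = 190.0, 264.0   # assumed input voltage window

def select_mode(voltage_v: float, frequency_hz: float) -> str:
    """Decide how the UPS handles the current mains input."""
    freq_ok = FREQ_MIN_HZ <= frequency_hz <= FREQ_MAX_HZ
    volt_ok = VOLT_MIN_V <= voltage_v <= VOLT_MAX_V
    if volt_ok and freq_ok:
        # Mains is usable; the UPS still regenerates a clean output
        # voltage and frequency from it.
        return "online: filter input, regulate output to 50 Hz"
    # Outside the tolerance window, the DC link and battery take over.
    return "battery: bridge via DC link and battery"

print(select_mode(230.0, 47.5))   # large frequency swing, still usable
print(select_mode(230.0, 38.0))   # below 40 Hz -> battery operation
```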

Modular UPS system from Wöhrle

Keep an eye on service, repair and downtimes

UPSs in hard-to-reach locations in particular need batteries with especially long maintenance intervals. Pure-lead batteries can last up to ten years, usually for a surcharge, and the investment pays off once the cost of time-consuming service calls is factored in. Lithium-ion types, which offer a considerably higher energy density, are now also being used; they allow longer bridging times despite their small size. Li-ion batteries also have an advantage when it comes to venting, which must be provided for lead batteries and which is not easy to arrange in the often hermetically sealed cabinets found in industrial or outdoor environments.
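What the higher energy density buys can be estimated with simple arithmetic. All figures in the following sketch (cabinet volume, energy densities, load, conversion efficiency) are rough illustrative assumptions; real bridging times also depend on usable depth of discharge and battery ageing.

```python
# Rough estimate of achievable bridging time for the same battery
# footprint. All figures are illustrative assumptions: typical
# volumetric energy densities and a fixed IT load.

CABINET_VOLUME_L = 100.0        # space reserved for batteries
LOAD_KW = 10.0                  # protected IT load
INVERTER_EFF = 0.95             # assumed DC-to-AC conversion efficiency

DENSITY_WH_PER_L = {
    "lead-acid": 80.0,          # assumed typical value
    "lithium-ion": 250.0,       # assumed typical value
}

for chem, density in DENSITY_WH_PER_L.items():
    energy_kwh = CABINET_VOLUME_L * density / 1000.0
    bridging_min = energy_kwh * INVERTER_EFF / LOAD_KW * 60.0
    print(f"{chem:11s}: {energy_kwh:4.1f} kWh "
          f"-> ~{bridging_min:5.1f} min at {LOAD_KW:.0f} kW")

# lead-acid  :  8.0 kWh -> ~ 45.6 min at 10 kW
# lithium-ion: 25.0 kWh -> ~142.5 min at 10 kW
```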

The availability of the uninterruptible power supply directly influences the function and availability of the downstream load. Many users believe that the highest possible MTBF (Mean Time Between Failures) is the most important parameter for the availability of a UPS. However, the MTBF is a statistical figure; it may or may not hold true in an individual case. What matters in practice is not how long it takes for the UPS to fail, but how long it takes until it is fully functional again. Two other factors are decisive here. One is the mean time to recover (MTTR), which indicates how quickly a UPS can be repaired, regardless of the fault.

The second factor is modularity and, with it, redundancy. UPSs for difficult operating conditions must have a modular design with power modules that can be replaced during operation (hot-plug). A modular design allows the load to be operated in n+x configurations, meaning that x modules can fail without the load losing the protection of the UPS. Power modules from 10 kW to 50 kW are available from Wöhrle Stromversorgungssysteme to cover outputs up to the megawatt range.
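The relationship between the two figures is captured by the standard steady-state formula A = MTBF / (MTBF + MTTR). The following Python sketch, with assumed hour values, shows why the MTTR dominates in practice: the same MTBF yields very different downtimes depending on whether a technician has to travel to a remote site or an on-site employee simply swaps a hot-plug module.

```python
# Steady-state availability from the two figures discussed above:
#   A = MTBF / (MTBF + MTTR)
# The hour values below are illustrative assumptions, not vendor data.

def availability(mtbf_h: float, mttr_h: float) -> float:
    return mtbf_h / (mtbf_h + mttr_h)

MTBF_H = 200_000.0   # assumed, identical in both scenarios

scenarios = {
    "monolithic UPS, on-site repair": 24.0,    # travel + diagnosis + repair
    "modular UPS, hot-plug module swap": 0.5,  # on-site staff swaps a module
}

for label, mttr_h in scenarios.items():
    a = availability(MTBF_H, mttr_h)
    downtime_min_per_year = (1.0 - a) * 365.0 * 24.0 * 60.0
    print(f"{label}: A = {a:.6f} "
          f"(~{downtime_min_per_year:.1f} min downtime per year)")

# Same MTBF, but the short MTTR cuts expected downtime
# from roughly an hour per year to about a minute.
```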

Modularity also simplifies service and repair. Defective power modules can simply be swapped for new ones, even by untrained personnel, which improves the MTTR dramatically. Service calls become easier too: while a technician tests one module, the remaining modules keep protecting the load, so bypass mode is not necessary.

Although cost is normally not the top priority where high availability is required, lower operating costs are still better than high ones. Because a UPS runs around the clock, it can account for a considerable share of power consumption, and the higher its efficiency, the smaller its impact on the energy balance. What has so far been a purely economic motivation may soon become a regulatory requirement. "At the moment, we are still generating almost 50% of our electricity in Germany from fossil fuels," says Staffan Reveman. "We are still miles away from the real decarbonization that the energy transition requires. It can be assumed that the political framework will force extensive energy efficiency measures." Modern plug-in modular systems with intelligent energy-saving modes achieve efficiencies above 95% and keep operating costs in check. Since utilization normally has a major influence on efficiency, plug-in modular systems have an advantage here as well: their power modules allow them to operate closer to the optimum operating point than monolithic UPSs.
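How much a few percentage points of efficiency are worth can again be estimated with simple arithmetic. Load, electricity price and the two efficiency values in the following sketch are illustrative assumptions.

```python
# What a few points of UPS efficiency mean in operating cost.
# Load, electricity price and efficiencies are illustrative assumptions.

LOAD_KW = 50.0            # continuous IT load behind the UPS
PRICE_EUR_PER_KWH = 0.30  # assumed industrial electricity price
HOURS_PER_YEAR = 8760.0   # continuous operation

for eff in (0.92, 0.96):
    input_kw = LOAD_KW / eff          # power drawn from the grid
    loss_kw = input_kw - LOAD_KW      # dissipated in the UPS itself
    cost = loss_kw * HOURS_PER_YEAR * PRICE_EUR_PER_KWH
    print(f"efficiency {eff:.0%}: {loss_kw:4.2f} kW losses "
          f"-> ~{cost:,.0f} EUR per year")

# efficiency 92%: 4.35 kW losses -> ~11,427 EUR per year
# efficiency 96%: 2.08 kW losses -> ~5,475 EUR per year
```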

The edge age is just around the corner

Thomas Bittman is certain that edge computing will be the next big trend: "IoT and intuitive, interactive user interfaces will lead a third of large companies to build edge locations or use them in co-location by 2021." This will also require solutions for mini data centers that can run autonomously over long periods, and these in turn require reliable power supplies. Plug-in modular UPS systems for industrial environments are available today and are ideally suited to the requirements profile of edge computing. With such UPSs, mini data centers can be operated reliably and for the long term anywhere.
