Sunday, October 11, 2020

Edge computing: where you can see all kinds of things you can't see from the Cloud

"I want to stand as close to the edge as I can without going over. Out on the edge you can see all kinds of things you can't see from the center." -- Kurt Vonnegut

The buzz phrase "digitalization of oil and gas" is a thing. But it is a thing with some substance. It has legs, as they say in the news biz.

This post takes a quick look at one aspect of the digitalization concept … edge computing. So what is edge computing? There is some disagreement on how to define it, but here are some items that can help us get our heads around it.

///////
Edge computing
From Wikipedia, the free encyclopedia
[ EXCERPTS ]
Edge computing is a distributed computing paradigm that brings computation and data storage closer to the location where it is needed, to improve response times and save bandwidth.

Definition
One definition of edge computing is any type of computer program that delivers low latency nearer to the requests. Karim Arabi, in an IEEE DAC 2014 keynote, defined edge computing broadly as all computing outside the cloud happening at the edge of the network, and more specifically in applications where real-time processing of data is required. In his definition, cloud computing operates on big data while edge computing operates on "instant data," that is, real-time data generated by sensors or users.

Concept
The increase of IoT devices at the edge of the network is producing a massive amount of data to be computed at data centers, pushing network bandwidth requirements to the limit. Despite the improvements of network technology, data centers cannot guarantee acceptable transfer rates and response times, which could be a critical requirement for many applications. Furthermore, devices at the edge constantly consume data coming from the cloud, forcing companies to build content delivery networks to decentralize data and service provisioning, leveraging physical proximity to the end user.

In a similar way, the aim of Edge Computing is to move the computation away from data centers towards the edge of the network, exploiting smart objects, mobile phones or network gateways to perform tasks and provide services on behalf of the cloud. By moving services to the edge, it is possible to provide content caching, service delivery, storage and IoT management resulting in better response times and transfer rates.

Edge application services reduce the volumes of data that must be moved, the consequent traffic, and the distance that data must travel. That provides lower latency and reduces transmission costs. Computation offloading for real-time applications, such as facial recognition algorithms, showed considerable improvements in response times, as demonstrated in early research. Further research showed that using resource-rich machines called cloudlets near mobile users, which offer services typically found in the cloud, provided improvements in execution time when some of the tasks are offloaded to the edge node. On the other hand, offloading every task may result in a slowdown due to transfer times between device and nodes, so depending on the workload an optimal configuration can be defined.

Other notable applications include connected cars, autonomous cars,[16] smart cities,[17] Industry 4.0 (smart industry) and home automation systems.[18]
source: https://en.wikipedia.org/wiki/Edge_computing
///////
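The offloading tradeoff described in the Wikipedia excerpt above, where sending every task to a remote node can be slower than computing locally once transfer time is counted, can be sketched with a simple cost model. This is illustrative only; the function name and all the numbers are my assumptions, not from the article:

```python
def offload_is_faster(task_cycles, local_ops_per_s, remote_ops_per_s,
                      payload_bytes, bandwidth_bytes_per_s, rtt_s):
    """Decide whether offloading a task to an edge/cloud node beats local execution.

    Total remote time = round-trip latency + transfer time + remote compute time.
    """
    local_time = task_cycles / local_ops_per_s
    remote_time = (rtt_s
                   + payload_bytes / bandwidth_bytes_per_s
                   + task_cycles / remote_ops_per_s)
    return remote_time < local_time

# A heavy computation with a small payload is worth offloading...
print(offload_is_faster(1e9, 1e8, 1e10, 1e4, 1e7, 0.02))   # True
# ...but a light task whose payload dominates transfer time is not.
print(offload_is_faster(1e6, 1e8, 1e10, 1e7, 1e6, 0.02))   # False
```

This is exactly the "optimal configuration depends on the workload" point: the same node can be the right or wrong place to compute depending on payload size and link speed.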

TIP: Google® oil gas "edge computing"
A couple results from the above search …

///////
Forbes.com, May 31, 2019
Moving To The Edge Is Crucial For Oil And Gas Companies To Make Better Use Of Data
Mark Venables, Former Contributor
[ EXCERPTS ]
The oil and gas industry already lives on the edge when it comes to the remote and often inhospitable geographic locations that it operates in, but now it is moving its computing to the edge to gain valuable business insights that can increase operational efficiency and profitability.

What is edge computing?
As of today, there is no standard definition of what the edge is. Wikipedia describes it as pushing the frontiers of computing applications, data, and services away from centralized nodes to the logical extremes of a network. It enables analytics and data gathering to occur at the source of the data. This approach requires leveraging resources such as laptops and smartphones that may not be continuously connected to a network.

There are four primary reasons why computing at the edge is needed in industrial operations—privacy, bandwidth, latency, and reliability. An edge solution achieves privacy by avoiding the need to send all raw data to be stored and processed on cloud servers. Bandwidth and the associated costs are reduced as all raw data is not sent to the cloud. There is no issue of latency when computing occurs at the edge and does not rely on a cloud connection. Finally, reliability is improved because it is possible to operate even when the cloud connection is interrupted.

Edge computing for oil and gas
Edge computing is heralding a revolution in the way that the oil and gas industry operates with a triumvirate of transformations for information, the workforce, and commercial operations.

Edge computing and the distributed nature of industrial operations complement cloud computing. An edge to cloud architecture enables enterprises to take advantage of the operational intelligence needed at the industrial edge, while also allowing augmented big data analytics and broad visualization in the cloud.

Delivering results from the edge
One stumbling block in digitalization that the oil and gas industry faces is a large amount of legacy equipment that is out in the field and still performing. Edge computing offers the possibility of connecting this existing legacy equipment such as analog meters or gauges, as well as standalone processes so they can be digitalized and integrated for a more robust network that can deliver real-time information allowing intelligent decisions to be made in the field.
source: https://www.forbes.com/sites/markvenables/2019/05/31/moving-to-the-edge-is-crucial-for-oil-and-gas-companies-to-make-better-use-of-data/#1d6ebf6259bd
///////
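Two of the four reasons in the Forbes excerpt, bandwidth and reliability, can be made concrete with a small sketch: instead of streaming raw samples to the cloud, an edge node sends only a compact periodic summary, and buffers summaries locally while the connection is down. The class and field names here are illustrative assumptions, not anyone's actual product:

```python
from statistics import mean

class EdgeSummarizer:
    """Summarize raw sensor samples at the edge (bandwidth) and buffer
    results while the cloud link is down (reliability)."""

    def __init__(self):
        self.buffer = []   # summaries awaiting upload

    def summarize(self, samples):
        # Only these few numbers leave the site, not the raw samples.
        return {"n": len(samples), "min": min(samples),
                "max": max(samples), "mean": mean(samples)}

    def process(self, samples, cloud_up):
        self.buffer.append(self.summarize(samples))
        if cloud_up:
            sent, self.buffer = self.buffer, []
            return sent        # these would be uploaded here
        return []              # link down: keep buffering locally

edge = EdgeSummarizer()
edge.process([1.0, 2.0, 3.0], cloud_up=False)   # buffered, nothing sent
sent = edge.process([4.0, 5.0], cloud_up=True)  # both summaries go up
print(len(sent))  # 2
```

Privacy falls out of the same pattern: the raw data never leaves the site, only derived values do.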

///////
Stratus Blog
Edge Computing
The Journey to Edge Computing for Oil and Gas Companies
by John Fryer October 21, 2019
[ EXCERPTS ]
The oil and gas industry is massive and highly diversified in its operational characteristics across the upstream, midstream and downstream sectors of the industry. Even within each sector, there are distinct differences; offshore gas/oil rigs have a completely different set of requirements from onshore well pads in the fracking industry. However, every sector is susceptible to the boom and bust cycles that have traditionally characterized the oil and gas industry. All of this makes oil and gas ideal for adopting IoT technologies to address a whole range of problems and risks, and to smooth out the ups and downs of the business cycle.

Where are Oil and Gas Companies Today with Edge Computing Adoption?
Stratus recently attended the IoT in Oil & Gas conference in Houston, TX, and it provided an interesting snapshot of where oil and gas is, relative to a lot of the hype that exists around IoT as a whole. If there is one common thread, it is that implementing IoT and analytics is a journey, not a project. It involves technology, but above all, people and processes. This was admirably illustrated by Marathon Oil, who described their three-year journey to implement digital oil field automation.

The Role of the Cloud and the Edge
Getting the data from the source to the cloud was a subject of great interest. There was universal agreement that the cloud is the place to conduct deep analytics, particularly where machine learning and artificial intelligence technologies can best be deployed. However, transporting the data from the edge to the cloud has its challenges. About 75% of the end users presenting indicated they were either deploying, testing or evaluating the use of edge computing to streamline their cloud-based analytics. They looked to edge computing to help with oil and gas tasks such as collecting data from a single site to limit the number of connections to a cloud. This is particularly important in oil and gas, where there are many remote locations.

The use of edge computing for real-time analytics where latency and round-trip delay would make a cloud-based approach unfeasible was also seen as an important application. There was also discussion about using edge computing to filter and normalize data before sending it to the cloud. This can significantly decrease bandwidth usage and reduce computing cost in the cloud.

There was universal agreement that edge computing will play a key role in the evolution of IoT deployments in the oil and gas industry. As the data becomes increasingly important to drive business decisions, its value will increase exponentially. Ultimately, being able to capture, store and process data locally with simple, protected and autonomous devices will become critical.

The Edge Roadmap
In summary, it is clear that we are in the early stages of IoT deployments. In Stratus’ recent Edge Computing Trend Report, the primary barrier to edge adoption was lack of education on if, when and how to use edge technology and applications.
 
In addition to the Edge Computing Trend Report, Stratus has materials that can help you figure out where you are now, where you need to go and how to get there. We have a short self-assessment that will tell you what stage you’re at now, and a maturity model that can help you think about the various aspects you need to consider and what you need for successful implementation.

John Fryer is the Senior Director Industry Solutions at Stratus Technologies, where he is responsible for go-to-market strategies and industry initiatives across all the company’s product lines. He has over 25 years of experience with systems and software products in a variety of engineering, marketing and executive roles at successful startups and major companies, including Motorola, Emerson Network Power and Oracle. His experience includes more than 15 years working with high-availability solutions for the enterprise, automation and networking industries.
source: https://blog.stratus.com/journey-edge-computing-oil-gas-companies/
///////
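The Stratus point about "collecting data from a single site to limit the number of connections to a cloud" can be sketched as a site gateway that fans in many local sensors and makes one batched upload per reporting interval. The names below are hypothetical, not Stratus' implementation:

```python
class SiteGateway:
    """Aggregate readings from many local sensors so a remote site makes
    one cloud connection per reporting interval instead of one per sensor."""

    def __init__(self, site_id):
        self.site_id = site_id
        self.readings = {}          # sensor_id -> latest value

    def ingest(self, sensor_id, value):
        # Called locally by each sensor; no network traffic involved.
        self.readings[sensor_id] = value

    def flush(self):
        """Build a single batched payload and clear local state."""
        payload = {"site": self.site_id, "readings": dict(self.readings)}
        self.readings.clear()
        return payload              # one upload, regardless of sensor count

gw = SiteGateway("well-pad-7")
gw.ingest("pressure-1", 101.3)
gw.ingest("temp-1", 88.5)
batch = gw.flush()
print(batch["site"], len(batch["readings"]))  # well-pad-7 2
```

For a remote well pad on a satellite or cellular link, cutting the connection count this way matters as much as cutting the byte count.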

///////
BDO
How Sensors and Edge Computing Are Maximizing Oil and Gas Data
February 2019
By Kirstie Tiernan, Managing Director, Analytics and Automation
[ EXCERPTS ]
The big data phenomenon—the massive increases in the volume, variety, and velocity of data—is hardly new. What is relatively new is the ability to digitize physical data via sensors and edge computing technologies. The result is more complex data sets than ever before—but also vast opportunities to convert that data into value, from analysis of rock formations, identification of oil-rich areas, and reservoir models that can maximize production, to automating operations to make them safer and more efficient. Data volumes now exceed 10 Terabytes (TB) of data per day for a single well, which, put in perspective, is equivalent to 6.9 million images uploaded to Instagram or the digital data storage required for 22,000 episodes of Game of Thrones.

However, oil and gas companies are leveraging just a tiny fraction of the data available to them. While they don’t have issues gathering the data, they lack the resources to properly manage and explore its benefits.
 
How Can Data Analytics Improve the E&P Process?
Data analytics has the power to transform oil and gas production systems’ fundamental operating models, providing vital information about what has happened and what could happen in the future, as well as insight on what to do about it. Advanced analytics, powered by machine learning, can identify patterns across variables in continually changing conditions. Machine learning algorithms can comb data for correlations and causalities that can be applied to find bottlenecks constraining production and determine prescriptive action.

Analytics can also reverse declining process efficiency, optimize production settings, and increase average production output. This includes anticipating daily and weekly fluctuations in production and getting to the root cause of variations in performance between operator crews.

In addition to enhancing skills and capabilities, there are various factors essential to harnessing the power of advanced analytics, such as the availability of data, analytics infrastructure, redesigned work, and governance and business-driven agility.

The Data War
The rise of big data has led to increased discussion around data ownership.

Traditionally, oil companies have purchased data such as seismic files or drilling logs that contractors gather for their customers. However, more recently, data is captured from oilfield equipment such as rigs, pipes, and pumps—an area of untapped potential for the industry. Cloud and AI systems further complicate the picture when it comes to data ownership, particularly with the use of algorithms for learning. One party may own the learning system, but another owns the resulting data.

Eventually, the rules of data ownership will need to be redefined.

What’s Ahead?
Data analytics is just the beginning of the digital revolution in oil and gas. The insights extracted from data can inform wide-scale transformation, subsequently streamlining operations and spawning new business models.
It’s also enabling the next generation of disruptive technology. Data visualization is a powerful way to quickly understand multivariate correlations, clusters, and outliers, but it’s limited to two dimensions. With the application of augmented reality or mixed reality, data analytics can render 3D simulations, enabling users to perceive and interact with the information in entirely new ways. A mixed reality headset called the Microsoft HoloLens, for instance, can transform the E&P process by allowing remote monitoring of sites.

The Economist recently stated, “The world’s most valuable resource is no longer oil, but data.” We’d argue that oil plus data is the real MVP.
source: https://www.bdo.com/insights/industries/natural-resources/how-sensors-and-edge-computing-are-maximizing-oil
///////
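The equivalences quoted in the BDO excerpt (10 TB per well per day versus 6.9 million Instagram images or 22,000 TV episodes) can be sanity-checked with quick arithmetic. Assuming decimal terabytes (1 TB = 10^12 bytes), which is my assumption, not stated in the article:

```python
tb_per_day = 10 * 10**12          # 10 TB/day for a single well, in bytes

bytes_per_image = tb_per_day / 6_900_000   # implied size of one Instagram image
bytes_per_episode = tb_per_day / 22_000    # implied size of one episode

print(round(bytes_per_image / 1e6, 2))     # 1.45  (MB per image)
print(round(bytes_per_episode / 1e6))      # 455   (MB per episode)
```

Roughly 1.5 MB per image and 455 MB per episode are both plausible file sizes, so the two comparisons are internally consistent with each other.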

///////
Oil and Gas Blog
Edge Computing: the new Cloud Computing?
Automation
Silke Müller, March 11, 2020
[ EXCERPTS ]
In connection with IIoT or data analysis, the terms cloud and edge computing keep coming up. [According to some] edge computing will be the new cloud computing. Is this true? Let’s take a quick look at both for now.

Cloud Computing and Edge Computing in a Nutshell
Cloud computing
The cloud is an IT infrastructure that does not have to be installed locally but is available via the Internet. The cloud provider offers various services such as storage space or application software for rent. The advantage is obvious: the end user no longer has to worry about sufficient storage space, license costs, or computing systems that they would have to maintain themselves.

In edge computing, data is processed directly at the point of origin, i.e. decentrally at the edge of the network – hence the term “edge”. Let’s take a look at an example to see what this means in comparison to the cloud.

Take Yokogawa’s cavitation detection. Cavitation can cause severe damage to pumps and valves through the implosion of vapor bubbles that form in liquids under certain conditions. To prevent this, it is important to detect the formation of bubbles as early as possible. Yokogawa’s solution relies on the evaluation of subtle changes in pressure variations caused by the implosion of bubbles. In order for these pressure fluctuations to be visible at all, and thus to be evaluated, the measured values must be recorded at a very high resolution, in this case every 100 ms. In addition, several internal parameters of the differential pressure meter used must be evaluated. That adds up to quite a lot of data.

Quick “at the Edge”
If one wanted to perform the necessary calculations in the cloud, this would mean a not inconsiderable data transfer. And a rather unnecessary one at that. After all, all that matters is the result – a measure of the level of pressure fluctuations and an alarm in the event of critical values. In such a case, it therefore makes much more sense to evaluate the data directly on site. In the above example, the differential pressure gauge is connected to a controller that calculates the level of pressure fluctuations. Based on the level of pressure fluctuations in normal conditions, it also determines threshold values for critical levels of incipient and severe cavitation. If the pressure fluctuations exceed the threshold values, a reaction can be carried out without delay, for example by directly switching off critical equipment. In other words, there is no latency in data transmission, as occurs with cloud computing. Edge computing is therefore certainly the more suitable approach in this case.

Overview in the Cloud
When it comes to maintenance tasks and a good overview of the condition of the equipment, it naturally makes sense to know where cavitation occurs, but also where equipment is approaching its wear limit or a defect is imminent. In this case, a maintenance engineer needs data from various plant components, as well as various parameters, in order to plan specifically in which area action is required. Here, too, algorithms can help him, for example those that estimate the service life of equipment on the basis of data (keyword: predictive maintenance). These are not necessarily time-critical considerations that require the fastest possible response, but rather the merging and joint processing of data from different sources. And this is where cloud computing is clearly ahead of the game.

So Edge Computing is not the new Cloud Computing after all?
Cloud and edge computing thus serve two different areas of application that exist alongside, and together with, each other; both have their place. One will therefore not replace or substitute the other. However, edge computing is expected to see the biggest developments in the near future, as it is not yet as mature as cloud computing. So in terms of hype, it may well be that edge computing becomes the new cloud computing.

The author
My name is Silke Müller. Since April 2017, I have been responsible for Data Science at Yokogawa. I gained experience in handling and evaluating large amounts of data even before I joined Yokogawa.
source: https://www.oilandgas-blog.com/en/edge-computing/
///////
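The cavitation logic described in the last excerpt, sample the differential pressure every 100 ms, quantify the fluctuation level, derive thresholds from normal operation, and alarm when they are exceeded, might look roughly like this. This is a minimal sketch under my own assumptions (window size, threshold factors, class and method names); it is not Yokogawa's actual algorithm:

```python
from collections import deque
from statistics import pstdev

class CavitationDetector:
    """Flag incipient/severe cavitation from the fluctuation level of
    differential-pressure samples taken every 100 ms at the edge."""

    def __init__(self, window=50, incipient_factor=3.0, severe_factor=6.0):
        self.samples = deque(maxlen=window)   # e.g. ~5 s of 100 ms samples
        self.incipient_factor = incipient_factor
        self.severe_factor = severe_factor
        self.baseline = None                  # fluctuation level in normal operation

    def fluctuation_level(self):
        # Population standard deviation as a simple measure of fluctuation.
        return pstdev(self.samples) if len(self.samples) > 1 else 0.0

    def calibrate(self):
        """Record the normal-operation fluctuation level as the baseline."""
        self.baseline = self.fluctuation_level()

    def update(self, pressure):
        self.samples.append(pressure)
        if self.baseline is None:
            return "calibrating"
        level = self.fluctuation_level()
        if level > self.severe_factor * self.baseline:
            return "severe"                   # e.g. trip the pump immediately
        if level > self.incipient_factor * self.baseline:
            return "incipient"                # e.g. raise a maintenance alert
        return "normal"

det = CavitationDetector(window=10)
for i in range(10):
    det.update(100.0 + 0.1 * (i % 2))   # steady operation, small ripple
det.calibrate()
print(det.update(100.0))   # normal
print(det.update(95.0))    # severe
```

Because the controller reacts on the very next 100 ms sample, the shutdown decision carries no cloud round-trip latency, which is the core of the argument in the excerpt; only the resulting status, not the raw samples, would need to go upstream.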

Google® Better!
Jean Steinhardt served as Librarian, Aramco Services, Engineering Division, for 13 years. He now heads Jean Steinhardt Consulting LLC, producing the same high quality research that he performed for Aramco.

Follow Jean’s blog at: http://desulf.blogspot.com/  for continuing tips on effective online research
Email Jean at research@jeansteinhardtconsulting.com  with questions on research, training, or anything else
Visit Jean’s Web site at http://www.jeansteinhardtconsulting.com/  to see examples of the services we can provide
