Model: 1336F-B030-AA-EN-HAS2-L6
Category: Rockwell Allen-Bradley
The goal of the data collection phase is to acquire a large amount of information to train the AI model. Raw, unprocessed data alone is not helpful, because the information may contain duplicates, errors, and outliers. Preprocessing the collected data at this initial stage to identify patterns, outliers, and missing information also allows users to correct errors and biases. Depending on the complexity of the data collected, the computing platforms used for data collection are typically based on Arm Cortex or Intel Atom/Core processors. In general, I/O and CPU specifications, rather than the GPU, matter more for data collection tasks.
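For example, a minimal preprocessing pass over collected readings might look like the following Python sketch; pandas is assumed, and the CSV file and column names are hypothetical:

```python
import pandas as pd

# Load raw sensor data collected in the field (hypothetical CSV schema).
df = pd.read_csv("sensor_readings.csv")

# Drop exact duplicate records produced by repeated transmissions.
df = df.drop_duplicates()

# Report missing values per column so gaps can be corrected or imputed.
print(df.isna().sum())

# Flag outliers in a measurement column using a simple z-score rule.
z = (df["temperature"] - df["temperature"].mean()) / df["temperature"].std()
outliers = df[z.abs() > 3]
print(f"{len(outliers)} outlier rows out of {len(df)}")
```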
AI models need to be trained on neural networks with resource-hungry machine learning or deep learning algorithms, which demand more powerful processing, such as GPUs that support parallel computing, to analyze large amounts of collected and preprocessed training data. Training an AI model involves selecting a machine learning model and training it on the collected and preprocessed data. During this process, the parameters also need to be evaluated and tuned to ensure accuracy. Many training models and tools are available to choose from, including off-the-shelf deep learning frameworks such as PyTorch, TensorFlow, and Caffe. Training is usually performed on designated AI training machines or cloud computing services, such as AWS Deep Learning AMIs, Amazon SageMaker Autopilot, Google Cloud AI, or Azure Machine Learning, rather than in the field.
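As a rough illustration of this training step, here is a minimal PyTorch sketch; the dataset, architecture, and hyperparameters are placeholders, not anything prescribed above:

```python
import torch
from torch import nn

# Placeholder dataset: 1,000 preprocessed samples with 16 features, 2 classes.
X = torch.randn(1000, 16)
y = torch.randint(0, 2, (1000,))

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Train, then monitor the loss so parameters can be tuned for accuracy.
for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```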
The final phase involves deploying the trained AI model on the edge computer so that it can make inferences and predictions based on newly collected and preprocessed data quickly and efficiently. Since the inferencing stage generally consumes fewer computing resources than training, a CPU or lightweight accelerator may be sufficient for the AIoT application.
Nonetheless, users will need a conversion tool, such as Intel's OpenVINO toolkit or NVIDIA's CUDA-based tools, to convert the trained model to run on specialized edge processors/accelerators. Inferencing also comes in several different edge computing levels with different requirements.
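As one common conversion path, a trained PyTorch model can be exported to the ONNX interchange format, which toolkits such as OpenVINO can then optimize for a target edge processor. A sketch, with the model and input shape as placeholders:

```python
import torch
from torch import nn

# Trained model from the previous phase (placeholder architecture).
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

# Export to ONNX; edge toolkits such as OpenVINO can then convert and
# optimize the resulting file for a specific processor or accelerator.
dummy_input = torch.randn(1, 16)
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["logits"])
```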
Although AI training is still mainly performed in the cloud or on local servers, data collection and inferencing necessarily take place at the edge of the network. Moreover, since inferencing is where the trained AI model does most of the work to accomplish the application objectives (i.e., make decisions or perform actions based on newly collected field data), users need to determine which of the following levels of edge computing they need in order to choose the appropriate processor.
Transferring data between the edge and the cloud is not only expensive but also time-consuming and results in latency. With low edge computing, applications only send a small amount of useful data to the cloud, which reduces lag time, bandwidth, data transmission fees, power consumption, and hardware costs. An Arm-based platform without accelerators can be used on IIoT devices to collect and analyze data and make quick inferences or decisions.
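A sketch of this filter-at-the-edge pattern is shown below; the threshold and cloud endpoint are hypothetical:

```python
import json
import urllib.request

THRESHOLD = 75.0  # hypothetical alert threshold for a sensor reading

def upload(event: dict) -> None:
    # Hypothetical cloud endpoint; only flagged events are transmitted.
    req = urllib.request.Request(
        "https://example.com/api/events",
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

def process(readings: list[float]) -> None:
    for value in readings:
        # Quick local decision: discard normal readings, upload anomalies.
        if value > THRESHOLD:
            upload({"value": value, "status": "anomaly"})
```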
Medium edge computing covers a wide range of data complexity, depending on the AI model and the performance requirements of the use case, such as facial recognition for an office entry system versus a large-scale public surveillance network. This level of inference can handle multiple IP camera streams for computer vision or video analytics at sufficient frame rates. Most industrial edge computing applications also need to factor in a limited power budget or a fanless design for heat dissipation. A high-performance CPU, an entry-level GPU, or a VPU may be used at this level. For instance, Intel Core i7 series CPUs paired with the OpenVINO toolkit and software-based AI/ML accelerators offer an efficient computer vision solution that can perform inference at the edge.
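A minimal sketch of CPU inference with the OpenVINO Python runtime, assuming the model has already been converted to OpenVINO IR as model.xml and that the input shape shown is what the model expects:

```python
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")          # converted IR model
compiled = core.compile_model(model, "CPU")   # target the CPU at the edge
output_layer = compiled.output(0)

# One preprocessed video frame (shape is model-specific; placeholder here).
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled([frame])[output_layer]
print(result.shape)
```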
High edge computing involves processing heavier loads of data for AI expert systems that use more complex pattern recognition, such as behavior analysis for automated video surveillance in public security systems to detect security incidents or potentially threatening events. Inferencing at this level generally uses accelerators, including high-end GPUs, VPUs, TPUs, or FPGAs, which consume more power (200 W or more) and generate excess heat.
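To illustrate offloading such a workload to an accelerator, here is a PyTorch sketch that batches frames onto a CUDA GPU; the model and batch size are placeholders, and a real system would load trained weights:

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder behavior-analysis model standing in for a trained network.
model = torch.nn.Sequential(torch.nn.Flatten(),
                            torch.nn.Linear(3 * 224 * 224, 8))
model.to(device).eval()

# Batch surveillance frames so the GPU's parallelism is actually used.
frames = torch.randn(32, 3, 224, 224, device=device)
with torch.no_grad():
    scores = model(frames)
print(scores.shape)  # (32, 8): event scores per frame
```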
Since the necessary power consumption and heat generated may exceed the limits at the far edge of the network, such as aboard a moving train, high edge computing systems are often deployed in near-edge sites, such as in a railway station, to perform tasks.
Several tools are available for various hardware platforms to help speed up the application development process or improve overall performance for AI algorithms and machine learning.