
Data collection

The goal of this phase is to acquire large amounts of information to train the AI model. Raw, unprocessed data alone is not helpful, because the information may contain duplicates, errors, and outliers. Preprocessing the collected data in this initial phase to identify patterns, outliers, and missing information also allows users to correct errors and bias. Depending on the complexity of the data collected, the computing platforms typically used for data collection are based on Arm Cortex or Intel Atom/Core processors. In general, I/O and CPU specifications (rather than the GPU) matter more for performing data collection tasks.
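The preprocessing steps described above (removing duplicates, filling missing values, and screening outliers) can be sketched with a small, hypothetical routine. The field values, z-score threshold, and processing order here are illustrative assumptions, not part of any particular product or toolchain:

```python
import statistics

def preprocess(readings, z_threshold=1.5):
    """Deduplicate, fill missing values, and drop outliers from sensor data.

    `readings` is a list of floats, with None marking missing samples.
    The z-score threshold of 1.5 is an illustrative choice, not a standard.
    """
    # 1. Remove exact duplicates while preserving order.
    seen, deduped = set(), []
    for r in readings:
        if r not in seen:
            seen.add(r)
            deduped.append(r)

    # 2. Fill missing values with the mean of the observed samples.
    observed = [r for r in deduped if r is not None]
    mean = statistics.mean(observed)
    filled = [mean if r is None else r for r in deduped]

    # 3. Drop values more than z_threshold standard deviations from the mean.
    stdev = statistics.pstdev(filled)
    if stdev == 0:
        return filled
    return [r for r in filled if abs(r - mean) / stdev <= z_threshold]

# A duplicate, a missing sample, and one obvious outlier (250.0):
cleaned = preprocess([20.1, 20.1, None, 19.8, 20.4, 250.0])
```

In practice the dropped or corrected records would also be logged, since the paragraph above notes that spotting these anomalies is how users correct errors and bias in the dataset.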



Training

AI models need to be trained on advanced neural networks and resource-hungry machine learning or deep learning algorithms that demand more powerful processing capabilities, such as powerful GPUs, to support parallel computing in order to analyze large amounts of collected and preprocessed training data. Training an AI model involves selecting a machine learning model and training it on collected and preprocessed data. During this process, there is also a need to evaluate and tune the parameters to ensure accuracy. Many training models and tools are available to choose from, including off-the-shelf deep learning design frameworks such as PyTorch, TensorFlow, and Caffe. Training is usually performed on designated AI training machines or cloud computing services, such as AWS Deep Learning AMIs, Amazon SageMaker Autopilot, Google Cloud AI, or Azure Machine Learning, instead of in the field.
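The select-train-evaluate-tune cycle described above can be sketched without any framework at all. This toy example uses plain-Python gradient descent on a one-parameter linear model to stand in for what PyTorch or TensorFlow would do at scale; the dataset, learning rate, and epoch count are illustrative assumptions:

```python
def train(data, lr=0.05, epochs=200):
    """Fit y = w * x to (x, y) pairs by minimizing mean squared error."""
    w = 0.0
    for _ in range(epochs):
        # Gradient of MSE with respect to w, averaged over the dataset.
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def evaluate(w, data):
    """Mean squared error on held-out data -- the accuracy-check step."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

train_set = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]  # roughly y = 2x
test_set = [(4.0, 8.0)]

w = train(train_set)
mse = evaluate(w, test_set)
```

If `mse` on the held-out set were unacceptable, one would tune hyperparameters (here `lr` and `epochs`) and retrain, which is exactly the evaluate-and-tune loop the paragraph refers to, performed with far heavier models and GPU-backed parallelism.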

Inferencing

The final phase involves deploying the trained AI model on the edge computer so that it can make inferences and predictions based on newly collected and preprocessed data quickly and efficiently. Since the inferencing stage generally consumes fewer computing resources than training, a CPU or lightweight accelerator may be sufficient for the AIoT application.

Nonetheless, users will need a conversion tool, such as Intel OpenVINO or NVIDIA CUDA, to convert the trained model to run on specialized edge processors/accelerators. Inferencing also includes several different edge computing levels and requirements.
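One of the optimizations such conversion toolkits commonly apply when targeting edge hardware is post-training quantization. The standalone sketch below shows only the core idea, symmetric linear quantization of float32 weights to int8 with a single scale factor; real toolkits do this per-tensor or per-channel with calibration data:

```python
def quantize_int8(weights):
    """Symmetric linear quantization of float weights into the int8 range."""
    # Scale so the largest magnitude maps to 127; guard against all-zero input.
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [round(w / scale) for w in weights]  # integers in [-127, 127]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights to measure quantization error."""
    return [v * scale for v in q]

weights = [0.52, -1.27, 0.08, 1.0]   # illustrative trained weights
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
```

Each weight is recovered to within half a quantization step (`scale / 2`), which is why int8 inference can run on small edge accelerators with little accuracy loss.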

Edge computing levels

Although AI training is still mainly performed in the cloud or on local servers, data collection and inferencing necessarily take place at the edge of the network. Moreover, since inferencing is where the trained AI model does most of the work to accomplish the application objectives (i.e., make decisions or perform actions based on newly collected field data), users need to determine which of the following levels of edge computing they need in order to choose the appropriate processor.

Low edge computing level

Transferring data between the edge and the cloud is not only expensive, but also time-consuming and results in latency. With low edge computing, applications only send a small amount of useful data to the cloud, which reduces lag time, bandwidth, data transmission fees, power consumption, and hardware costs. An Arm-based platform without accelerators can be used on IIoT devices to collect and analyze data to make quick inferences or decisions.
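The "send only a small amount of useful data" pattern above can be sketched as a local filter on the device: forward only out-of-range samples plus a compact summary, rather than streaming every reading. The normal range and payload shape are hypothetical:

```python
# Assumed acceptable band for the sensor, e.g. temperature in degrees C.
NORMAL_RANGE = (15.0, 30.0)

def filter_for_upload(samples):
    """Keep only anomalous samples for upload, plus a one-line local summary."""
    lo, hi = NORMAL_RANGE
    anomalies = [s for s in samples if not lo <= s <= hi]
    summary = {"count": len(samples), "min": min(samples), "max": max(samples)}
    return anomalies, summary

samples = [21.3, 22.0, 45.7, 21.8, 20.9]
to_upload, summary = filter_for_upload(samples)
```

Here only one of five readings (plus a few summary bytes) leaves the device, which is where the savings in latency, bandwidth, transmission fees, and power come from.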

Medium edge computing level

This level of inference can handle various IP camera streams for computer vision or video analytics with sufficient processing frame rates. Medium edge computing includes a wide range of data complexity based on the AI model and the performance requirements of the use case, such as facial recognition applications for an office entry system versus a large-scale public surveillance network. Most industrial edge computing applications also need to factor in a limited power budget or a fanless design for heat dissipation. It may be possible to use a high-performance CPU, entry-level GPU, or VPU at this level. For instance, the Intel Core i7 Series CPUs offer an efficient computer vision solution with the OpenVINO toolkit and software-based AI/ML accelerators that can perform inference at the edge.
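The "sufficient processing frame rates" requirement above amounts to a simple capacity check: given a per-frame inference time, how many camera streams can one sequential pipeline keep up with at a target frame rate? The latency and fps figures below are illustrative, not benchmarks of any specific CPU:

```python
def max_streams(inference_ms_per_frame, target_fps):
    """Number of streams one sequential pipeline can serve in real time."""
    # Each stream delivers a frame every 1000 / target_fps milliseconds,
    # so that is the per-stream time budget the pipeline must fit into.
    budget_ms_per_stream = 1000.0 / target_fps
    return int(budget_ms_per_stream // inference_ms_per_frame)

# Hypothetical: 8 ms inference per frame, cameras running at 15 fps each.
streams = max_streams(8.0, 15)
```

A calculation like this is what separates an office entry system (one or two streams) from a surveillance network (dozens of streams), and thus whether a CPU alone suffices or an entry-level GPU/VPU is needed.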

High edge computing level

High edge computing involves processing heavier loads of data for AI expert systems that use more complex pattern recognition, such as behavior analysis for automated video surveillance in public security systems to detect security incidents or potentially threatening events. Inferencing at this level generally uses accelerators, including a high-end GPU, VPU, TPU, or FPGA, which consume more power (200 W or more) and generate excess heat.

Since the necessary power consumption and heat generated may exceed the limits at the far edge of the network, such as aboard a moving train, high edge computing systems are often deployed in near-edge sites, such as in a railway station, to perform tasks.
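The deployment decision in the two paragraphs above can be framed as a simple budget check: if an accelerator's draw exceeds what the far-edge site can supply and dissipate, the workload moves to a near-edge site. The wattage figures below are illustrative assumptions, not specifications for any real installation:

```python
def choose_site(accelerator_watts, far_edge_budget_watts=60):
    """Pick a deployment site from a (hypothetical) far-edge power budget."""
    if accelerator_watts <= far_edge_budget_watts:
        return "far edge"      # e.g., a fanless box aboard the train
    return "near edge"         # e.g., a server room in the railway station

site_for_vpu = choose_site(12)    # lightweight accelerator fits on board
site_for_gpu = choose_site(250)   # high-end GPU (200 W or more) does not
```

Real sizing would also account for thermal design power, ambient temperature, and redundancy, but the budget comparison is the core of why high edge computing tends to land at near-edge sites.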

Several tools are available for various hardware platforms to help speed up the application development process or improve overall performance for AI algorithms and machine learning.


推薦產(chǎn)品

如果您有任何問題,請跟我們聯(lián)系!

聯(lián)系我們

Copyright © 2002-2020 廈門雄霸電子商務(wù)有限公司 版權(quán)所有

閩公網(wǎng)安備 35020302034927號

備案號:閩ICP備14012685號

地址:廈門市思明區(qū)呂嶺路1733號萬科創(chuàng)想中心2009室

在線客服 聯(lián)系方式 二維碼

服務(wù)熱線

13313705507

掃一掃,關(guān)注我們