Non-uniform Memory Access
Non-uniform memory access (NUMA) is a computer memory design used in multiprocessing, where the memory access time depends on the memory location relative to the processor. Under NUMA, a processor can access its own local memory faster than non-local memory (memory local to another processor, or memory shared between processors). NUMA is beneficial for workloads with high memory locality of reference and low lock contention, because a processor may operate on a subset of memory mostly or entirely within its own cache node, reducing traffic on the memory bus.

NUMA architectures logically follow in scaling from symmetric multiprocessing (SMP) architectures. They were developed commercially during the 1990s by Unisys, Convex Computer (later Hewlett-Packard), Honeywell Information Systems Italy (HISI) (later Groupe Bull), Silicon Graphics (later Silicon Graphics International), Sequent Computer Systems (later IBM), Data General (later EMC, now Dell Technologies), Digital (later Compaq, then HP, now HPE) and ICL. Techniques developed by these companies later featured in a variety of Unix-like operating systems, and to an extent in Windows NT. An early commercial implementation was the Symmetrical Multi Processing XPS-100 family of servers, designed by Dan Gielan of VAST Corporation for Honeywell Information Systems Italy.

Modern CPUs operate considerably faster than the main memory they use. In the early days of computing and data processing, the CPU generally ran slower than its own memory. The performance lines of processors and memory crossed in the 1960s with the advent of the first supercomputers. Since then, CPUs increasingly have found themselves "starved for data" and having to stall while waiting for data to arrive from memory (for computers based on the von Neumann architecture, see Von Neumann bottleneck). Many supercomputer designs of the 1980s and 1990s focused on providing high-speed memory access rather than faster processors, allowing the computers to work on large data sets at speeds other systems could not approach. Limiting the number of memory accesses became the key to extracting high performance from a modern computer. For commodity processors, this meant installing an ever-growing amount of high-speed cache memory and using increasingly sophisticated algorithms to avoid cache misses.

But the dramatic increase in the size of operating systems and of the applications run on them has generally overwhelmed these cache-processing improvements. Multiprocessor systems without NUMA make the problem considerably worse: a system can now starve several processors at the same time, notably because only one processor can access the computer's memory at a time. NUMA attempts to address this problem by providing separate memory for each processor, avoiding the performance hit when several processors attempt to address the same memory. For problems involving spread data (common for servers and similar applications), NUMA can improve the performance over a single shared memory by a factor of roughly the number of processors (or separate memory banks). Another approach to addressing this problem is the multi-channel memory architecture, in which a linear increase in the number of memory channels increases memory access concurrency linearly.
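The benefit of locality can be exploited explicitly by placing a task's memory on the node where it runs. The following minimal C sketch assumes a Linux system with the libnuma library (linked with -lnuma); the choice of node 0 and the 64 MiB buffer size are illustrative, not prescriptive.

 /* Sketch: pin the calling thread to one NUMA node and allocate
    memory from that node's local bank, so accesses stay local. */
 #include <numa.h>
 #include <stdio.h>
 #include <string.h>
 
 int main(void) {
     if (numa_available() < 0) {
         fprintf(stderr, "NUMA is not supported on this system\n");
         return 1;
     }
 
     const int node = 0;                   /* illustrative node choice */
     const size_t size = 64 * 1024 * 1024; /* 64 MiB */
 
     numa_run_on_node(node);               /* restrict thread to this node's CPUs */
     void *buf = numa_alloc_onnode(size, node);
     if (buf == NULL) {
         fprintf(stderr, "allocation on node %d failed\n", node);
         return 1;
     }
 
     /* Touching the pages faults them in on the chosen node, so later
        accesses travel over the local memory bus, not the interconnect. */
     memset(buf, 0, size);
     printf("allocated %zu bytes local to node %d\n", size, node);
 
     numa_free(buf, size);
     return 0;
 }

Operating systems apply the same idea implicitly: Linux, for example, uses a first-touch default policy, backing each page with memory from the node whose processor first writes it.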
Of course, not all data ends up confined to a single task, which means that more than one processor may require the same data. To handle these cases, NUMA systems include additional hardware or software to move data between memory banks. This operation slows the processors attached to those banks, so the overall speed increase due to NUMA depends heavily on the nature of the running tasks.

AMD implemented NUMA with its Opteron processor (2003), using HyperTransport. Intel announced NUMA compatibility for its x86 and Itanium servers in late 2007 with its Nehalem and Tukwila CPUs.

Nearly all CPU architectures use a small amount of very fast non-shared memory known as cache to exploit locality of reference in memory accesses. With NUMA, maintaining cache coherence across shared memory has a significant overhead. Although simpler to design and build, non-cache-coherent NUMA systems become prohibitively complex to program in the standard von Neumann architecture programming model. Typically, cache-coherent NUMA (ccNUMA) uses inter-processor communication between cache controllers to keep a consistent memory image when more than one cache stores the same memory location.
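The non-uniform cost structure is visible to software. As a minimal sketch, again assuming Linux with libnuma (linked with -lnuma), the following queries the node count and the firmware-reported distance matrix, in which a node's distance to itself is conventionally 10 and remote nodes report larger values:

 /* Sketch: print the NUMA node distance matrix reported by the
    firmware (ACPI SLIT); local access is 10 by convention. */
 #include <numa.h>
 #include <stdio.h>
 
 int main(void) {
     if (numa_available() < 0) {
         fprintf(stderr, "NUMA is not supported on this system\n");
         return 1;
     }
 
     int max = numa_max_node();  /* highest node number, e.g. 1 on a 2-socket box */
     printf("nodes: 0..%d\n", max);
 
     for (int i = 0; i <= max; i++)
         for (int j = 0; j <= max; j++)
             printf("distance(%d,%d) = %d\n", i, j, numa_distance(i, j));
     return 0;
 }

On a two-socket ccNUMA machine this typically prints 10 on the diagonal and a value such as 20 off it, quantifying how much more expensive a remote access is relative to a local one.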