Digital signal processing
Digital signal processing (DSP) is the use of digital processing, such as by computers or more specialized digital signal processors, to perform a wide variety of signal processing operations. The signals processed in this manner are sequences of numbers that represent samples of a continuous variable in a domain such as time, space, or frequency. Digital signal processing and analog signal processing are subfields of signal processing. DSP applications include audio and speech processing, sonar and other sensor array processing, spectral density estimation, statistical signal processing, digital image processing, signal processing for telecommunications, control systems, and biomedical engineering, among others. DSP can involve linear or nonlinear operations. Nonlinear signal processing is closely related to nonlinear system identification and can be implemented in the time and spatio-temporal domains. The application of digital computation to signal processing offers advantages over analog processing in many applications, such as error detection and correction in transmission as well as data compression.
DSP is applicable to both streaming data and static (stored) data. To digitally analyze and manipulate an analog signal, it must first be digitized with an analog-to-digital converter (ADC). Digitization is carried out in two stages: discretization and quantization. Discretization means that the signal is divided into equal intervals of time, and each interval is represented by a single measurement of amplitude. Quantization means that each amplitude measurement is approximated by a value from a finite set; rounding real numbers to integers is an example. The Nyquist–Shannon sampling theorem states that a signal can be exactly reconstructed from its samples if the sampling frequency is greater than twice the highest frequency component in the signal. In practice, the sampling frequency is often chosen significantly higher than this minimum. Theoretical DSP analyses and derivations are performed on discrete-time signal models with no amplitude inaccuracies, "created" by the abstract process of sampling. Numerical methods, by contrast, require a quantized signal, such as those produced by an ADC. The processed result might be a set of statistics, but often it is another quantized signal that is converted back to analog form by a digital-to-analog converter (DAC).
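To make the two digitization stages concrete, the following minimal Python/NumPy sketch samples a test tone and quantizes the amplitudes to a finite set of levels; the sampling rate, tone frequency and bit depth are illustrative choices, not values from the text.

```python
import numpy as np

# Discretization: take equally spaced samples of a continuous-time signal.
fs = 8000.0                       # sampling frequency in Hz, well above the Nyquist rate
f0 = 440.0                        # frequency of an illustrative test tone in Hz
t = np.arange(0, 0.01, 1.0 / fs)  # the sample instants
x = np.sin(2 * np.pi * f0 * t)    # ideal discrete-time samples (no amplitude error yet)

# Quantization: approximate each amplitude by one of 2**bits values.
bits = 8
levels = 2 ** bits
codes = np.round((x + 1.0) / 2.0 * (levels - 1))   # map [-1, 1] to integer codes
x_hat = codes / (levels - 1) * 2.0 - 1.0           # amplitudes the codes stand for

print("max quantization error:", np.max(np.abs(x - x_hat)))
```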
In DSP, engineers study digital signals in one of the following domains: time domain, spatial domain, frequency domain, or wavelet domains. They choose the domain in which to process a signal by making an informed assumption as to which domain best represents the essential characteristics of the signal and the processing to be applied to it. A sequence of samples from a measuring device produces a temporal or spatial domain representation, whereas a discrete Fourier transform produces the frequency domain representation. The most common processing approach in the time or space domain is enhancement of the input signal through a method called filtering. Digital filtering generally consists of some linear transformation of a number of surrounding samples around the current sample of the input or output signal. There are various ways to characterize filters. Linear filters satisfy the superposition principle, i.e. if an input is a weighted linear combination of different signals, the output is the corresponding weighted linear combination of the output signals.
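As a concrete illustration of time-domain filtering as a linear combination of surrounding samples, the hedged sketch below applies a 5-point moving average with NumPy and checks the superposition property of a linear filter; the signal, noise level and filter length are arbitrary demonstration choices.

```python
import numpy as np

# Each output sample is a linear combination of the 5 surrounding input samples.
rng = np.random.default_rng(0)
n = np.arange(200)
clean = np.sin(2 * np.pi * 0.01 * n)             # slowly varying component
noisy = clean + 0.3 * rng.standard_normal(n.size)

h = np.ones(5) / 5.0                             # filter coefficients (impulse response)
smoothed = np.convolve(noisy, h, mode="same")    # time-domain filtering

# Superposition: filtering a weighted sum of inputs equals the same weighted
# sum of the individually filtered outputs.
a, b = 2.0, -0.5
lhs = np.convolve(a * clean + b * noisy, h, mode="same")
rhs = a * np.convolve(clean, h, mode="same") + b * np.convolve(noisy, h, mode="same")
print(np.allclose(lhs, rhs))                     # True
```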
A causal filter uses only current and previous samples of the input or output signals, while a non-causal filter also uses future input samples. A non-causal filter can usually be changed into a causal filter by adding a delay to it. A time-invariant filter has constant properties over time. A stable filter produces an output that converges to a constant value with time, or remains bounded within a finite interval; an unstable filter can produce an output that grows without bounds, even with bounded or zero input. A finite impulse response (FIR) filter uses only the input signal, while an infinite impulse response (IIR) filter uses both the input signal and previous samples of the output signal. FIR filters are always stable, while IIR filters may be unstable. A filter can be represented by a block diagram, which can then be used to derive a sample processing algorithm to implement the filter with hardware instructions. A filter may also be described as a difference equation, a collection of zeros and poles, or an impulse response or step response. The output of a linear digital filter to any given input may be calculated by convolving the input signal with the impulse response.
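The sketch below, which assumes SciPy is available, shows a filter written as a difference equation whose feedback coefficients make it an IIR filter, and checks numerically that its output equals the input convolved with its impulse response; the coefficient values are made up for illustration.

```python
import numpy as np
from scipy.signal import lfilter

# Difference equation y[n] = 0.2*x[n] + 0.2*x[n-1] + 0.6*y[n-1]: the feedback on
# previous output samples makes this an IIR filter.
b = [0.2, 0.2]         # feed-forward (input) coefficients
a = [1.0, -0.6]        # feedback (output) coefficients, normalized so a[0] = 1

impulse = np.zeros(64)
impulse[0] = 1.0
h = lfilter(b, a, impulse)            # impulse response (decays but never exactly ends)

rng = np.random.default_rng(1)
u = rng.standard_normal(64)
y_direct = lfilter(b, a, u)                      # run the difference equation
y_conv = np.convolve(u, h)[: u.size]             # convolve input with impulse response
print(np.allclose(y_direct, y_conv))             # True for the first len(u) samples
```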
Signals are converted from the time or space domain to the frequency domain through use of the Fourier transform. The Fourier transform converts the time or space information to a magnitude and phase component for each frequency. For some applications, how the phase varies with frequency can be a significant consideration. Where phase is unimportant, the Fourier transform is often reduced to the power spectrum, the magnitude of each frequency component squared. The most common purpose for analysis of signals in the frequency domain is analysis of signal properties. The engineer can study the spectrum to determine which frequencies are present in the input signal and which are missing. Frequency domain analysis is also called spectrum or spectral analysis. Filtering, particularly in non-realtime work, can also be achieved in the frequency domain by applying the filter and then converting back to the time domain; this can be an efficient implementation and can give essentially any filter response, including excellent approximations to brick-wall filters.
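A minimal NumPy sketch of this kind of frequency-domain analysis: it transforms a two-tone test signal, forms the power spectrum by squaring the magnitudes, and performs non-realtime filtering by zeroing bins and transforming back; the tone frequencies and cut-off are illustrative assumptions.

```python
import numpy as np

fs = 1000.0                                   # assumed sampling rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

X = np.fft.rfft(x)                            # complex spectrum: magnitude and phase
freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
power = np.abs(X) ** 2                        # power spectrum: phase discarded

print(sorted(freqs[np.argsort(power)[-2:]]))  # ~[50.0, 120.0]: the tones present

# Non-realtime filtering in the frequency domain: keep bins below 80 Hz, go back.
X_low = np.where(freqs < 80, X, 0)
x_lowpassed = np.fft.irfft(X_low, n=x.size)
```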
Digital elevation model
A digital elevation model (DEM) is a 3D computer graphics representation of a terrain's surface – of a planet, moon, or asteroid – created from a terrain's elevation data. A "global DEM" refers to a discrete global grid. DEMs are used in geographic information systems and are the most common basis for digitally produced relief maps. While a digital surface model (DSM) may be useful for landscape modeling, city modeling and visualization applications, a digital terrain model (DTM) is often required for flood or drainage modeling, land-use studies, geological applications and other applications, as well as in planetary science. There is no universal usage of the terms digital elevation model, digital terrain model and digital surface model in scientific literature. In most cases the term digital surface model represents the earth's surface and includes all objects on it. In contrast to a DSM, the digital terrain model represents the bare ground surface without any objects like plants and buildings. DEM is often used as a generic term for DSMs and DTMs, representing only height information without any further definition about the surface.
Other definitions equalise the terms DEM and DTM, equalise the terms DEM and DSM, define the DEM as a subset of the DTM, which also represents other morphological elements, or define a DEM as a rectangular grid and a DTM as a three-dimensional model. Most data providers use the term DEM as a generic term for DTMs. All datasets captured with satellites, airplanes or other flying platforms are originally DSMs, but it is possible to compute a DTM from high-resolution DSM datasets with complex algorithms. In the following, the term DEM is used as a generic term for DTMs. A DEM can be represented as a raster (a grid of squares, also known as a heightmap when representing elevation) or as a vector-based triangular irregular network (TIN). The TIN DEM dataset is referred to as a primary DEM, whereas the raster DEM is referred to as a secondary DEM. The DEM could be acquired through techniques such as photogrammetry, lidar, IfSAR, land surveying, etc. DEMs are commonly built using data collected using remote sensing techniques, but they may also be built from land surveying. The digital elevation model itself consists of a matrix of numbers, but the data from a DEM is rendered in visual form to make it understandable to humans.
This visualization may be in the form of a contoured topographic map, or could use shading and false color assignment to render elevations as colors. Visualizations are sometimes done as oblique views, reconstructing a synthetic visual image of the terrain as it would appear looking down at an angle. In these oblique visualizations, elevations are sometimes scaled using "vertical exaggeration" in order to make subtle elevation differences more noticeable. Some scientists, however, object to vertical exaggeration as misleading the viewer about the true landscape. Mappers may prepare digital elevation models in a number of ways, but they frequently use remote sensing rather than direct survey data. Older methods of generating DEMs involve interpolating digital contour maps that may have been produced by direct survey of the land surface; this method is still used in mountain areas. Note that contour line data or any other sampled elevation datasets are not DEMs, but may be considered digital terrain models. A DEM implies that elevation is available continuously at each location in the study area.
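As a hedged illustration of the "matrix of numbers" view and of false-color and shaded-relief rendering with vertical exaggeration, the sketch below builds a small synthetic raster DEM and displays it with matplotlib; the terrain, the exaggeration factor and the crude shading formula are invented for illustration and are not a standard DEM-processing workflow.

```python
import numpy as np
import matplotlib.pyplot as plt

# A raster DEM is just a matrix of elevation values; here a small synthetic one.
y, x = np.mgrid[0:200, 0:200]
dem = (50 * np.exp(-((x - 120) ** 2 + (y - 80) ** 2) / 2000.0)
       + 20 * np.exp(-((x - 60) ** 2 + (y - 150) ** 2) / 800.0))

exaggeration = 3.0                          # vertical exaggeration for the shaded view
dz_dy, dz_dx = np.gradient(dem * exaggeration)
shade = dz_dy - dz_dx                       # crude illumination from one corner

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
ax1.imshow(dem, cmap="terrain")             # false-color assignment of elevations
ax1.set_title("elevation as color")
ax2.imshow(shade, cmap="gray")              # simple shaded relief
ax2.set_title("shaded relief (exaggerated)")
plt.show()
```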
One powerful technique for generating digital elevation models is interferometric synthetic aperture radar, where two passes of a radar satellite, or a single pass if the satellite is equipped with two antennas, collect sufficient data to generate a digital elevation map tens of kilometers on a side with a resolution of around ten meters. Other kinds of stereoscopic pairs can be employed using the digital image correlation method, where two optical images are acquired with different angles taken from the same pass of an airplane or an Earth observation satellite. The SPOT 1 satellite provided the first usable elevation data for a sizeable portion of the planet's landmass, using two-pass stereoscopic correlation. Further data were provided by the European Remote-Sensing Satellite using the same method, the Shuttle Radar Topography Mission using single-pass SAR, and the Advanced Spaceborne Thermal Emission and Reflection Radiometer instrumentation on the Terra satellite using double-pass stereo pairs.
The HRS instrument on SPOT 5 has acquired over 100 million square kilometers of stereo pairs. A tool of increasing value in planetary science has been orbital altimetry, used to make digital elevation maps of planets. A primary tool for this is laser altimetry. Planetary digital elevation maps made using laser altimetry include the Mars Orbiter Laser Altimeter mapping of Mars, the Lunar Orbiter Laser Altimeter and Lunar Altimeter mapping of the Moon, and the Mercury Laser Altimeter mapping of Mercury. Methods of obtaining elevation data used to create DEMs include lidar; radar; stereo photogrammetry from aerial surveys; structure from motion / multi-view stereo applied to aerial photography; block adjustment from optical satellite imagery; interferometry from radar data; real-time kinematic GPS; topographic maps; theodolite or total station; Doppler radar; focus variation; inertial surveys; and surveying.
Graphics processing unit
A graphics processing unit (GPU) is a specialized electronic circuit designed to manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device. GPUs are used in embedded systems, mobile phones, personal computers and game consoles. Modern GPUs are efficient at manipulating computer graphics and image processing, and their parallel structure makes them more efficient than general-purpose CPUs for algorithms that process large blocks of data in parallel. In a personal computer, a GPU can be present on a video card or embedded on the motherboard; in certain CPUs, it is embedded on the CPU die. The term GPU has been in use since at least the 1980s. It was popularized by Nvidia in 1999, which marketed the GeForce 256 as "the world's first GPU", presented as a "single-chip processor with integrated transform, triangle setup/clipping, rendering engines". Rival ATI Technologies coined the term "visual processing unit" or VPU with the release of the Radeon 9700 in 2002.
Arcade system boards have been using specialized graphics chips since the 1970s. In early video game hardware, the RAM for frame buffers was expensive, so video chips composited data together as the display was being scanned out on the monitor. Fujitsu's MB14241 video shifter was used to accelerate the drawing of sprite graphics for various 1970s arcade games from Taito and Midway, such as Gun Fight, Sea Wolf and Space Invaders. The Namco Galaxian arcade system in 1979 used specialized graphics hardware supporting RGB color, multi-colored sprites and tilemap backgrounds. The Galaxian hardware was used during the golden age of arcade video games by game companies such as Namco, Gremlin, Konami, Nichibutsu and Taito. In the home market, the Atari 2600 in 1977 used a video shifter called the Television Interface Adaptor. The Atari 8-bit computers had ANTIC, a video processor which interpreted instructions describing a "display list"—the way the scan lines map to specific bitmapped or character modes and where the memory is stored.
6502 machine code subroutines could be triggered on scan lines by setting a bit on a display list instruction. ANTIC also supported smooth vertical and horizontal scrolling independent of the CPU. The NEC µPD7220 was one of the first implementations of a graphics display controller as a single Large Scale Integration (LSI) integrated circuit chip, enabling the design of low-cost, high-performance video graphics cards such as those from Number Nine Visual Technology, and it became one of the best-known graphics display controllers of the 1980s. The Williams Electronics arcade games Robotron 2084, Joust and Bubbles, all released in 1982, contain custom blitter chips for operating on 16-color bitmaps. In 1985, the Commodore Amiga featured a custom graphics chip, with a blitter unit accelerating bitmap manipulation, line drawing and area fill functions. It also included a coprocessor with its own primitive instruction set, capable of manipulating graphics hardware registers in sync with the video beam, or driving the blitter. In 1986, Texas Instruments released the TMS34010, the first microprocessor with on-chip graphics capabilities.
It could run general-purpose code, but it had a graphics-oriented instruction set. In 1990–1992, this chip became the basis of the Texas Instruments Graphics Architecture Windows accelerator cards. In 1987, the IBM 8514 graphics system was released as one of the first video cards for IBM PC compatibles to implement fixed-function 2D primitives in electronic hardware. The same year, Sharp released the X68000, which used a custom graphics chipset that was powerful for a home computer at the time, with a 65,536-color palette and hardware support for sprites and multiple playfields, serving as a development machine for Capcom's CP System arcade board. Fujitsu competed with the FM Towns computer, released in 1989 with support for a full 16,777,216-color palette. In 1988, the first dedicated polygonal 3D graphics boards were introduced in arcades with the Namco System 21 and Taito Air System. In 1991, S3 Graphics introduced the S3 86C911, which its designers named after the Porsche 911 as an indication of the performance increase it promised.
The 86C911 spawned a host of imitators: by 1995, all major PC graphics chip makers had added 2D acceleration support to their chips. By this time, fixed-function Windows accelerators had surpassed expensive general-purpose graphics coprocessors in Windows performance, and these coprocessors faded away from the PC market. Throughout the 1990s, 2D GUI acceleration continued to evolve; as manufacturing capabilities improved, so did the level of integration of graphics chips. Additional application programming interfaces arrived for a variety of tasks, such as Microsoft's WinG graphics library for Windows 3.x and its DirectDraw interface for hardware acceleration of 2D games within Windows 95 and later. In the early and mid-1990s, real-time 3D graphics were becoming common in arcade and console games, which led to increasing public demand for hardware-accelerated 3D graphics. Early examples of mass-market 3D graphics hardware can be found in arcade system boards such as the Sega Model 1, Namco System 22 and Sega Model 2, and in fifth-generation video game consoles such as the Saturn, PlayStation and Nintendo 64.
Arcade systems such as the Sega Model 2 and Namco Magic Edge Hornet Simulator in 1993 were capable of hardware T&L years before it appeared in consumer graphics cards.
Jeffrey Scott Vitter
Jeffrey Scott Vitter is a U.S. computer scientist and academic administrator. Born in 1955 in New Orleans, Vitter has served in several senior higher education administration posts and is a former chancellor of the University of Mississippi. He assumed the chancellor position on January 1, 2016; his formal investiture to the chancellorship took place on November 10, 2016, at the University of Mississippi's Oxford Campus. Vitter was raised in New Orleans, Louisiana. He earned a bachelor of science in mathematics with highest honors from the University of Notre Dame in 1977, a Ph.D. in computer science from Stanford University under the supervision of Donald Knuth in 1980, and a master of business administration from Duke University in 2002. Vitter was unanimously named as the 17th chancellor of the University of Mississippi by the Mississippi Board of Trustees of State Institutions of Higher Learning on October 29, 2015 and began duties on January 1, 2016. In November 2018, Vitter announced that he would step down as chancellor to become a regular faculty member, effective January 4, 2019.
Under his leadership as chancellor, the university built momentum through a dynamic and inclusive strategic plan, Flagship Forward, with initiatives including a $1 billion building program, multidisciplinary research networks of faculty called Flagship Constellations, annual Technology Summits, major community partnerships through the M Partner program, and extended capacity and reach of the University of Mississippi Medical Center. The university instituted a forthright contextualization of southern symbols on campus and established an Office of Diversity and Community Engagement. In December 2018, the university's status as a Carnegie R1 research university was reaffirmed. From 2010, Vitter was provost and executive vice chancellor and Roy A. Roberts Distinguished Professor at the University of Kansas in Lawrence, Kansas. As provost, Vitter was the chief academic and operations officer for the Lawrence and Edwards campuses. He co-chaired the development of the KU strategic plan Bold Aspirations and oversaw the creation of the first-ever university-wide KU Core curriculum, expansion of the Schools of Engineering and Business, boosting of multidisciplinary research and funding around four strategic initiatives, major growth of technology commercialization and corporate partnerships, and administrative reorganization and efficiency.
Vitter served at Texas A&M University in College Station, Texas as provost and executive vice president for academics from 2008 to 2009, leading the 48,000-student university in the development of the institution's academic master plan and launching initiatives affecting faculty start-up allocations, multidisciplinary priorities and diversity. He also oversaw A&M's campus in Doha, Qatar. From 2002 to 2008, Vitter was the Frederick Hovde Dean of the College of Science at Purdue University in West Lafayette, where he led the development of two strategic plans, establishing a dual focus of excellence in core departments and in multidisciplinary collaborations; he oversaw net growth by 60 faculty members and launched the collaborative design of an innovative outcomes-based college curriculum. At Duke University in Durham, North Carolina, from 1993 to 2002, Vitter held a distinguished professorship as the Gilbert and Edward Lehrman Professor; he chaired the Department of Computer Science for eight and a half years and led it to significant gains in ratings.
From 1980 to 1992, he progressed through the faculty ranks in the Department of Computer Science at Brown University in Providence, Rhode Island. He was awarded tenure in 1985 at the age of 29. Vitter spent sabbatical leaves at the Mathematical Sciences Research Institute in Berkeley, CA. Vitter is a computer scientist with over 300 book, journal, and conference publications on the design and mathematical analysis of algorithms dealing with big data and data science. His Google Scholar h-index is in the 70s, and he is an ISI highly cited researcher. He helped establish the field of I/O algorithms as a rigorous area of active investigation, and he has made fundamental contributions in databases. Vitter is a Fellow of the National Academy of Inventors, a Fellow of the American Association for the Advancement of Science, a Fulbright Scholar, a Fellow of the Association for Computing Machinery, a Fellow of the Institute of Electrical and Electronics Engineers, a John Simon Guggenheim Memorial Foundation Fellow, a National Science Foundation Presidential Young Investigator Awardee, and a member of Phi Kappa Phi, Sigma Xi, and Phi Beta Kappa.
Vitter serves or has served on several advisory boards and boards of directors, including at the NSF Computer and Information Science and Engineering directorate, the Center for Massive Data Algorithmics at Aarhus University, the School of Science and Engineering at Tulane University, the University of Mississippi Foundation, the University of Mississippi Research Foundation, Innovate Mississippi, the National Graphene Association, the Computing Research Association, and the Personalized Learning Consortium of the Association of Public and Land-grant Universities.
Magnetic-core memory
Magnetic-core memory was the predominant form of random-access computer memory for 20 years between about 1955 and 1975. Such memory is often just called core memory, or, informally, core. Core memory uses toroids of a hard magnetic material as transformer cores, where each wire threaded through the core serves as a transformer winding. Three or four wires pass through each core, and each core stores one bit of information. A core can be magnetized in either the clockwise or the counter-clockwise direction; the value of the bit stored in a core is zero or one according to the direction of that core's magnetization. Electric current pulses in some of the wires through a core allow the direction of the magnetization in that core to be set in either direction, thus storing a one or a zero. Another wire through each core, the sense wire, is used to detect whether the core changed state. The process of reading the core causes the core to be reset to a zero; this is called destructive readout. When not being read or written, the cores maintain the last value they had, even if the power is turned off.
Therefore, they are a type of non-volatile memory. Using smaller cores and wires, the memory density of core increased, and by the late 1960s a density of about 32 kilobits per cubic foot was typical. However, reaching this density required careful manufacture, which was almost always carried out by hand in spite of repeated major efforts to automate the process. The cost declined over this period from about $1 per bit to about 1 cent per bit. The introduction of the first semiconductor memory chips, SRAM, in the late 1960s began to erode the market for core memory. The first successful DRAM, the Intel 1103, which arrived in quantity in 1972 at 1 cent per bit, marked the beginning of the end for core memory. Improvements in semiconductor manufacturing led to rapid increases in storage capacity and decreases in price per kilobyte, while the costs and specifications of core memory changed little. Core memory was driven from the market between 1973 and 1978. Although core memory is obsolete, computer memory is still sometimes called "core", even though it is made of semiconductors, by people who had worked with machines having real core memory.
And the files that result from saving the entire contents of memory to disk for debugging purposes when a major error occurs are still called "core dumps". The basic concept of using the square hysteresis loop of certain magnetic materials as a storage or switching device was known from the earliest days of computer development. Much of this knowledge had developed due to an understanding of transformers, which allowed amplification and switch-like performance when built using certain materials. The stable switching behavior was well known in the electrical engineering field, and its application in computer systems was immediate. For example, J. Presper Eckert and Jeffrey Chuan Chu had done some development work on the concept in 1945 at the Moore School during the ENIAC efforts. Frederick Viehe applied for various patents on the use of transformers for building digital logic circuits in place of relay logic beginning in 1947. A patent on a fully developed core system was granted in 1947 and later purchased by IBM in 1956.
This development was little-known, however, and the mainstream development of core memory is associated with three independent teams. Substantial work in the field was carried out by the Shanghai-born American physicists An Wang and Way-Dong Woo, who created the pulse transfer controlling device in 1949. The name referred to the way that the magnetic field of the cores could be used to control the switching of current. Wang and Woo were working at Harvard University's Computation Laboratory at the time, but the university was not interested in promoting inventions created in its labs, and Wang was able to patent the system on his own. The MIT Whirlwind computer required a fast memory system for real-time aircraft tracking use. At first, Williams tubes—a storage system based on cathode ray tubes—were used, but these devices were always temperamental and unreliable. Several researchers in the late 1940s conceived the idea of using magnetic cores for computer memory, but Jay Forrester received the principal patent for his invention of the coincident-current core memory that enabled the 3D storage of information.
William Papian of Project Whirlwind cited one of these efforts, Harvard's "Static Magnetic Delay Line", in an internal memo. The first core memory of 32 × 32 × 16 bits was installed on Whirlwind in the summer of 1953. Papian wrote: "Magnetic-Core Storage has two big advantages: greater reliability with a consequent reduction in maintenance time devoted to storage... The Wang memory was complicated. As I recall, which may not be correct, it used two cores per binary bit and was a delay line that moved a bit forward. To the extent that I may have focused on it, the approach was not suitable for our purposes." He describes the invention and associated events in 1975. Forrester has since observed, "It took us about seven years to convince the industry that random-access magnetic-core memory was the solution to a missing link in computer technology. We spent the following seven years in the patent courts convincing them that they had not all thought of it first." A third developer involved in the early development of core was Jan A. Rajchman at RCA.
Geographic information system
A geographic information system (GIS) is a system designed to capture, manipulate, analyze and present spatial or geographic data. GIS applications are tools that allow users to create interactive queries, analyze spatial information, edit data in maps, and present the results of all these operations. GIS sometimes refers to geographic information science, the science underlying geographic concepts and systems. GIS can refer to a number of different technologies, processes and methods; it is attached to many operations and has many applications related to engineering, management, transport/logistics, telecommunications and business. For that reason, GIS and location intelligence applications can be the foundation for many location-enabled services that rely on analysis and visualization. GIS can relate otherwise unrelated information by using location as the key index variable. Locations or extents in Earth space–time may be recorded as dates/times of occurrence, and as x, y, and z coordinates representing latitude, longitude and elevation, respectively.
All Earth-based spatial–temporal location and extent references should be relatable to one another, and ultimately to a "real" physical location or extent. This key characteristic of GIS has begun to open new avenues of scientific inquiry. The first known use of the term "geographic information system" was by Roger Tomlinson in 1968, in his paper "A Geographic Information System for Regional Planning". Tomlinson is acknowledged as the "father of GIS". One of the first applications of spatial analysis in epidemiology is the 1832 "Rapport sur la marche et les effets du choléra dans Paris et le département de la Seine". The French geographer Charles Picquet represented the 48 districts of the city of Paris by halftone color gradient according to the number of deaths by cholera per 1,000 inhabitants. In 1854 John Snow determined the source of a cholera outbreak in London by marking points on a map depicting where the cholera victims lived, and connecting the cluster that he found with a nearby water source.
This was one of the earliest successful uses of a geographic methodology in epidemiology. While the basic elements of topography and theme existed in cartography, the John Snow map was unique, using cartographic methods not only to depict but also to analyze clusters of geographically dependent phenomena. The early 20th century saw the development of photozincography, which allowed maps to be split into layers, for example one layer for vegetation and another for water. This was used for printing contours – drawing these was a labour-intensive task, but having them on a separate layer meant they could be worked on without the other layers to confuse the draughtsman. This work was originally drawn on glass plates, but later plastic film was introduced, with the advantages of being lighter, using less storage space and being less brittle, among others. When all the layers were finished, they were combined into one image using a large process camera. Once color printing came in, the layers idea was also used for creating separate printing plates for each color.
While the use of layers much later became one of the main typical features of a contemporary GIS, the photographic process just described is not considered to be a GIS in itself – as the maps were just images with no database to link them to. Two additional developments are notable in the early days of GIS: Ian McHarg's publication "Design with Nature" and its map overlay method, and the introduction of a street network into the U.S. Census Bureau's DIME system. Computer hardware development spurred by nuclear weapon research led to general-purpose computer "mapping" applications by the early 1960s. The year 1960 saw the development of the world's first true operational GIS in Ottawa, Canada, by the federal Department of Forestry and Rural Development. Developed by Dr. Roger Tomlinson, it was called the Canada Geographic Information System (CGIS) and was used to store and manipulate data collected for the Canada Land Inventory – an effort to determine the land capability for rural Canada by mapping information about soils, recreation, waterfowl and land use at a scale of 1:50,000.
A rating classification factor was also added to permit analysis. CGIS was an improvement over "computer mapping" applications as it provided capabilities for overlay and digitizing/scanning. It supported a national coordinate system that spanned the continent, coded lines as arcs having a true embedded topology, and it stored the attribute and locational information in separate files. As a result of this, Tomlinson has become known as the "father of GIS" for his use of overlays in promoting the spatial analysis of convergent geographic data. CGIS built a large digital land resource database in Canada; it was developed as a mainframe-based system in support of federal and provincial resource planning and management. Its strength was continent-wide analysis of complex datasets. The CGIS was never available commercially. In 1964 Howard T. Fisher formed the Laboratory for Computer Graphics and Spatial Analysis at the Harvard Graduate School of Design, where a number of important theoretical concepts in spatial data handling were developed, and which by the 1970s had distributed seminal software code and systems – such as SYMAP, GRID and ODYSSEY – that served as sources for subsequent commercial development, to universities, research centers and corporations worldwide.
By the late 1970s two public domain GIS systems were in development, and by the early 1980s, M&S Computing (later Intergraph), Environmental Systems Research Institute (ESRI) and CARIS had emerged as commercial vendors of GIS software.
Parallel external memory
In computer science, the parallel external memory (PEM) model is a cache-aware, external-memory abstract machine. It is the parallel-computing analogy to the single-processor external memory (EM) model; in a similar way, it is the cache-aware analogy to the parallel random-access machine (PRAM). The PEM model consists of a number of processors, together with their respective private caches and a shared main memory, and is a combination of the EM model and the PRAM model. The PEM model is a computation model which consists of P processors and a two-level memory hierarchy. This memory hierarchy consists of a large external memory (main memory) of size N and P small internal memories (caches). The processors share the main memory. Each cache is exclusive to a single processor; a processor cannot access another's cache. The caches have size M and are partitioned into blocks of size B. The processors can only perform operations on data which is in their caches; the data can be transferred between the main memory and a cache in blocks of size B. The complexity measure of the PEM model is the I/O complexity, which determines the number of parallel block transfers between the main memory and the caches.
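The toy Python sketch below is one way to make the model's bookkeeping concrete: P simulated processors with private caches move data only in whole blocks of size B, and a round in which every processor transfers at most one block is charged as a single parallel I/O. The class and parameter names are invented for illustration and do not come from any standard library.

```python
class PEMSimulator:
    """Toy bookkeeping for the PEM model's I/O complexity (illustrative only)."""

    def __init__(self, P, M, B):
        self.P, self.M, self.B = P, M, B
        self.caches = [[] for _ in range(P)]   # block ids held by each processor
        self.parallel_ios = 0                  # the model's complexity measure

    def parallel_load(self, requests):
        """requests[p] is the block id processor p fetches this round, or None."""
        assert len(requests) == self.P
        for p, block in enumerate(requests):
            if block is None or block in self.caches[p]:
                continue
            self.caches[p].append(block)
            if len(self.caches[p]) > self.M // self.B:   # a cache holds M/B blocks
                self.caches[p].pop(0)                    # evict the oldest block
        if any(b is not None for b in requests):
            self.parallel_ios += 1                       # one round = one parallel I/O


sim = PEMSimulator(P=4, M=64, B=8)
sim.parallel_load([0, 1, 2, 3])        # four processors fetch four distinct blocks
sim.parallel_load([4, None, 5, None])  # only two processors transfer this round
print(sim.parallel_ios)                # 2: rounds are counted, not per-processor loads
```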
During a parallel block transfer each processor can transfer one block. So if P processors each load a data block of size B from the main memory into their caches in parallel, this is counted as an I/O complexity of O(1), not O(P). A program in the PEM model should minimize the data transfer between main memory and caches and operate as much as possible on the data already in the caches. In the PEM model, there is no direct communication network between the P processors; the processors have to communicate indirectly over the main memory. If multiple processors try to access the same block in main memory concurrently, read/write conflicts occur. As in the PRAM model, three different variations of this problem are considered: Concurrent Read Concurrent Write (CRCW): the same block in main memory can be read and written by multiple processors concurrently. Concurrent Read Exclusive Write (CREW): the same block in main memory can be read by multiple processors concurrently, but only one processor can write to a block at a time. Exclusive Read Exclusive Write (EREW): the same block in main memory cannot be read or written by multiple processors concurrently.
Only one processor can access a block at a time. The following two approaches solve the CREW and EREW problem if P ≤ B processors write to the same block simultaneously. A first approach is to serialize the write operations: only one processor after the other writes to the block, which results in a total of P parallel block transfers. A second approach needs O(log P) parallel block transfers and an additional block for each processor; the main idea is to schedule the write operations in a binary tree fashion and gradually combine the data into a single block. In the first round, pairs of processors combine their blocks into P/2 blocks; then P/2 processors combine the P/2 blocks into P/4 blocks, and this procedure is continued until the data is combined in a single block after O(log P) rounds.

Let M = {m_1, ..., m_{d-1}} be a vector of d − 1 pivots sorted in increasing order, and let A be an unordered set of N elements. A d-way partition of A is a set Π = {A_1, ..., A_d}, where the union of the A_i is A and A_i ∩ A_j = ∅ for 1 ≤ i < j ≤ d. A_i is called the i-th bucket; the elements of A_i are greater than m_{i-1} and smaller than m_i.
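A small sequential Python sketch of the d-way partition just defined: each element is placed in the bucket determined by the pivots (elements equal to a pivot go, by convention, into the lower bucket). It shows only the bucket assignment; the PEM algorithm additionally spreads this work over the P processors and lays the buckets out in blocks of size B.

```python
import bisect

def d_way_partition(A, pivots):
    """Split A into len(pivots) + 1 buckets delimited by the sorted pivots."""
    buckets = [[] for _ in range(len(pivots) + 1)]
    for x in A:
        i = bisect.bisect_left(pivots, x)   # index of the first pivot >= x
        buckets[i].append(x)                # bucket i holds elements in (pivots[i-1], pivots[i]]
    return buckets

A = [7, 2, 9, 4, 11, 5, 1, 8]
pivots = [3, 6, 10]                          # d - 1 = 3 pivots, hence d = 4 buckets
print(d_way_partition(A, pivots))            # [[2, 1], [4, 5], [7, 9, 8], [11]]
```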
In the multiway partitioning algorithm, the input is partitioned into N/P-sized contiguous segments S_1, ..., S_P in main memory, and processor i works on the segment S_i. The multiway partitioning algorithm uses a PEM prefix sum algorithm to calculate the prefix sums with the optimal O(N/(PB) + log P) I/O complexity.
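The hedged sketch below simulates the segment-based prefix-sum idea sequentially: each of P simulated processors sums its contiguous N/P segment, a scan over the P partial sums yields a per-segment offset, and a second local pass produces the final prefix sums. In the PEM model the two local passes cost O(N/(PB)) parallel I/Os each and combining the P partial sums contributes the remaining term; the function name and the purely sequential simulation are illustrative assumptions.

```python
import itertools

def segmented_prefix_sum(data, P):
    """Sequential stand-in for the PEM-style prefix sum over P segments."""
    n = len(data)
    bounds = [round(i * n / P) for i in range(P + 1)]
    segments = [data[bounds[i]:bounds[i + 1]] for i in range(P)]

    local_sums = [sum(seg) for seg in segments]                   # one local pass each
    offsets = [0] + list(itertools.accumulate(local_sums))[:-1]   # scan of the P sums

    result = []
    for seg, offset in zip(segments, offsets):                    # second local pass
        running = offset
        for value in seg:
            running += value
            result.append(running)
    return result

data = [3, 1, 4, 1, 5, 9, 2, 6]
print(segmented_prefix_sum(data, P=4))    # [3, 4, 8, 9, 14, 23, 25, 31]
print(list(itertools.accumulate(data)))   # same result, computed sequentially
```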