The Cray XE6 is an enhanced version of the Cray XT6 supercomputer, announced on 25 May 2010. The XE6 uses the same compute blade found in the XT6, with eight- or 12-core Opteron 6100 processors giving up to 3,072 cores per cabinet, but replaces the SeaStar2+ interconnect router used in the Cray XT5 and XT6 with the faster and more scalable Gemini router ASIC, which provides a 3-dimensional torus network topology between nodes. Each XE6 node has either 32 or 64 GB of DDR3 SDRAM memory, and two nodes share one Gemini router ASIC. The XE6 runs version 3 of the Cray Linux Environment, which incorporates Cray's Compute Node Linux.
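To make the torus topology concrete, here is a minimal neighbor-addressing sketch in C for a 3-D torus like the one Gemini provides; the 8×8×8 extents and the coordinate scheme are illustrative assumptions, not Cray's actual dimensions or routing.

```c
#include <stdio.h>

/* Hypothetical torus extents, chosen only for illustration. */
#define DX 8
#define DY 8
#define DZ 8

/* Coordinates wrap around, so node 0 and node D-1 are adjacent:
 * this wrap-around is what distinguishes a torus from a plain mesh. */
static int wrap(int c, int d) { return ((c % d) + d) % d; }

int main(void) {
    int x = 0, y = 3, z = 7;  /* an example node position */
    /* Each node has exactly two neighbors per dimension, six in total. */
    printf("x neighbors: %d and %d\n", wrap(x - 1, DX), wrap(x + 1, DX));
    printf("y neighbors: %d and %d\n", wrap(y - 1, DY), wrap(y + 1, DY));
    printf("z neighbors: %d and %d\n", wrap(z - 1, DZ), wrap(z + 1, DZ));
    return 0;
}
```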
The Cray J90 series was an air-cooled vector processor supercomputer first sold by Cray Research in 1994. The J90 evolved from the Cray Y-MP EL minisupercomputer and is compatible with Y-MP software, running the same UNICOS operating system. The J90 supported up to 32 CMOS processors with a 10 ns clock, up to 4 GB of main memory and up to 48 GB/s of memory bandwidth, giving it less performance than the contemporary Cray T90 but making it a strong competitor to other technical computers in its price range. All input/output in a J90 system was handled by an IOS called IOS Model V; the IOS-V was based on the VME64 bus and SPARC I/O processors running the VxWorks RTOS. The IOS was programmed to emulate the IOS Model E, used in the larger Cray Y-MP systems, in order to minimize changes in the UNICOS operating system. By using standard VME boards, a wide variety of commodity peripherals could be used. The J90 was available in three basic configurations: the J98 with up to eight processors, the J916 with up to 16, and the J932 with up to 32.
Each J90 processor was composed of two chips: one for the scalar portion of the processor, the other for the vector portion. The scalar chip was notable for including a small data cache to enhance scalar performance. In 1997 the J90se series became available, which doubled the scalar speed of the processors to 200 MHz. Support was also added for the GigaRing I/O system found on the Cray T3E and Cray SV1, replacing the IOS-V. SV1 processors could be installed in a J90 or J90se, further increasing performance within the same frame.
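The scalar/vector chip split maps onto two kinds of code. The sketch below is generic C with nothing Cray-specific assumed: it contrasts a vectorizable loop of the sort a vector unit streams through its pipes with pointer-chasing scalar code, where a small data cache is what helps.

```c
#include <stddef.h>

/* A vectorizable loop: independent element-wise work that a vector
 * processor can stream through its pipes. */
void daxpy(size_t n, double a, const double *x, double *y) {
    for (size_t i = 0; i < n; i++)
        y[i] += a * x[i];
}

/* Scalar pointer chasing: no vector parallelism to exploit, but repeated
 * visits to recently touched memory are exactly what a small scalar data
 * cache accelerates. */
struct node { int value; struct node *next; };

int sum_list(const struct node *p) {
    int s = 0;
    for (; p != NULL; p = p->next)
        s += p->value;
    return s;
}
```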
The Cray X1 is a non-uniform memory access, vector processor supercomputer manufactured and sold by Cray Inc. since 2003. The X1 is described as the unification of the Cray T90, Cray SV1, and Cray T3E architectures into a single machine: the X1 shares the multistreaming processors, vector caches, and CMOS design of the SV1, the scalable distributed-memory design of the T3E, and the high memory bandwidth and liquid cooling of the T90. The X1 uses a 1.2 ns clock cycle and 8-wide vector pipes in MSP mode, offering a peak speed of 12.8 gigaflops per processor. Air-cooled models are available with up to 64 processors. Liquid-cooled systems scale to a theoretical maximum of 4096 processors, comprising 1024 shared-memory nodes connected in a two-dimensional torus network, in 32 frames; such a system would supply a peak speed of 50 teraflops. The largest unclassified X1 system was the 512-processor system at Oak Ridge National Laboratory, though this has since been upgraded to an X1E system. The X1 can be programmed either with widely used message-passing software such as MPI and PVM, or with shared-memory languages such as Unified Parallel C or Co-array Fortran.
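As a sanity check on these figures, 4096 processors at 12.8 gigaflops each give 4096 × 12.8 GFLOPS ≈ 52.4 TFLOPS, in line with the quoted peak of roughly 50 teraflops. On the programming-model side, the following is a minimal message-passing sketch in C using MPI, one of the models named above; it is generic MPI and assumes nothing X1-specific. Run it with at least two ranks (e.g. mpirun -np 2).

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* Rank 0 sends one double to rank 1. */
        double payload = 3.14;
        MPI_Send(&payload, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Rank 1 blocks until the message arrives. */
        double payload;
        MPI_Recv(&payload, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %f from rank 0 (of %d ranks)\n",
               payload, size);
    }
    MPI_Finalize();
    return 0;
}
```

The shared-memory alternatives mentioned above, Unified Parallel C and Co-array Fortran, express the same exchange as direct reads and writes of remote memory rather than explicit send/receive pairs.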
The X1 runs an operating system called UNICOS/mp, which shares more with the SGI IRIX operating system than with the UNICOS found on prior-generation Cray machines. In 2005, Cray released the X1E upgrade, which uses dual-core processors, allowing two quad-processor nodes to fit on a node board; the processors are also upgraded to 1150 MHz. This upgrade triples the peak performance per board, but reduces the per-processor memory and interconnect bandwidth. X1 and X1E boards can be combined within the same system. The X1 is notable in that its development was funded by the United States Government's National Security Agency. The X1 was not a financially successful product, and it seems doubtful that it or its successors would have been produced without this support.
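The tripling claim follows from the figures given here: doubling the cores per processor while raising the clock from the X1's 1.2 ns cycle (≈833 MHz) to 1150 MHz gives 2 × (1150 MHz / 833 MHz) ≈ 2.8 times the per-board peak, which rounds to roughly a tripling.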
The T3D was Cray Research's first attempt at a massively parallel supercomputer architecture. Launched in 1993, it marked Cray's first use of another company's microprocessor. The T3D consisted of between 32 and 2048 processing elements (PEs), each comprising a 150 MHz DEC Alpha 21064 microprocessor and either 16 or 64 MB of DRAM. PEs were grouped in nodes, which incorporated a 6-way processor interconnect switch; these switches had a peak bandwidth of 300 MB/second in each direction and were connected to form a three-dimensional torus network topology. The T3D was designed to be hosted by a Cray Y-MP Model E, M90 or C90-series "front-end" system and to rely on it and its UNICOS operating system for all I/O and most system services; the T3D PEs themselves ran a simple microkernel called UNICOS MAX. Several different configurations of the T3D were available. The SC models shared a cabinet with a host Y-MP system and were available with either 128 or 256 PEs; the MC models were housed in one or more liquid-cooled cabinets separate from the host, while the MCA models were smaller air-cooled multi-cabinet configurations.
There was also a liquid-cooled MCN model, which had an alternative interconnect wire mat allowing non-power-of-2 numbers of PEs. The Cray T3D MC cabinet had an Apple Macintosh PowerBook laptop built into its front; its only purpose was to display animated Cray T3D logos on its color LCD screen. The first T3D delivered was a prototype installed at the Pittsburgh Supercomputing Center in early September 1993, and the supercomputer was formally introduced on 27 September 1993. The T3D was superseded in 1995 by the faster and more sophisticated Cray T3E.
The Cray XT3 is a distributed-memory massively parallel MIMD supercomputer designed by Cray Inc. with Sandia National Laboratories under the codename Red Storm; Cray turned the design into a commercial product in 2004. The XT3 derives much of its architecture from the previous Cray T3E system and from the Intel ASCI Red supercomputer. The XT3 consists of between 192 and 32,768 processing elements, where each PE comprises a 2.4 or 2.6 GHz AMD Opteron processor with up to two cores, a custom "SeaStar" communications chip, and between 1 and 8 GB of RAM. The PowerPC 440-based SeaStar device provides a 6.4-gigabyte-per-second connection to the processor across HyperTransport, as well as six 8-gigabyte-per-second links to neighboring PEs. The PEs are arranged with 96 PEs in each cabinet. The XT3 runs an operating system called UNICOS/lc that partitions the machine into three sections: the largest comprising the compute nodes, and two smaller sections for service nodes and I/O nodes. In UNICOS/lc 1.x, the compute PEs run a Sandia-developed microkernel called Catamount, descended from the SUNMOS OS of the Intel Paragon.
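Taking the stated figures at face value, each SeaStar's aggregate network bandwidth is 6 links × 8 GB/s = 48 GB/s, well above the 6.4 GB/s HyperTransport connection to its local processor. A plausible reading is that the links are sized this way because, in a torus, each router must also carry pass-through traffic routed between other PEs, not just the messages its own processor injects.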
Service and I/O PEs run the full version of SuSE Linux and are used for interactive logins, systems management, application compiling and job launch. I/O PEs use physically distinct hardware, in that their node boards include PCI-X slots for connections to Ethernet and Fibre Channel networks. Though the performance of each XT3 model varies with the speed and number of processors installed, the November 2007 Top500 results for the Red Storm machine, the largest XT3 installed at Sandia, measured 102.7 teraflops on the Linpack benchmark, placing it at #6 on the list. After upgrades in 2008 to install some XT4 nodes with quad-core Opterons, Red Storm achieved 248 teraflops, placing it at #9 on the November 2008 Top500. The architecture was superseded in 2006 by the Cray XT4.
The Cray XC40 is a massively parallel multiprocessor supercomputer manufactured by Cray. It consists of Intel Haswell Xeon processors, with optional Nvidia Tesla or Intel Xeon Phi accelerators, connected together by Cray's proprietary "Aries" interconnect and housed in air-cooled or liquid-cooled cabinets. The XC series supercomputers are available with Cray's DataWarp applications I/O accelerator technology. The Pawsey Supercomputing Centre has a 35,712-core XC40 called "Magnus" for general science research; this supercomputer has a processing power of 1.097 petaflops. The Australian Bureau of Meteorology has a 51,840-core XC40 called "Australis" with 276 TB of RAM and a usable storage of 4.3 PB; with a peak performance of 1.6 petaflops, it provides the operational computing capability for weather, climate and wave numerical prediction and simulation. The Finnish IT Center for Science (CSC) computer "Sisu" was completed as an XC40 in 2014; it has 40,512 cores with an overall peak performance of 1,688 TFlops.
The High Performance Computing Center, Stuttgart has built a 185,088-core XC40 named "Hazel Hen" with a peak performance of 7,420 TFlops. The Supercomputer Education and Research Centre at the Indian Institute of Science has an XC40 supercomputer named SahasraT, with 1,376 compute nodes together with Intel Xeon Phi and NVIDIA K40 GPU accelerators. Pratyush and Mihir are XC40 systems installed at two Indian government institutes: a 4.0-petaflops unit at the Indian Institute of Tropical Meteorology, Pune, and a 2.8-petaflops unit at the National Centre for Medium Range Weather Forecasting, Noida; together the two units provide a combined 6.8 petaflops. The Center for Computational Astrophysics at the National Astronomical Observatory of Japan has an XC40 system named "ATERUI", an upgrade from a previous Cray XC30 system. The Interdisciplinary Centre for Mathematical and Computational Modelling in Warsaw has an XC40 supercomputer named Okeanos with 1,084 compute nodes with 128 GB of RAM each.
King Abdullah University of Science and Technology has an XC40 named Shaheen, with a processing power of 5.54 petaflops across 196,608 cores. The Royal Institute of Technology has a 53,632-core XC40 called "Beskow". The Swiss National Supercomputing Centre in Lugano had a system in 2013 named Piz Dora, a Cray XC40 with 1,256 compute nodes; this has since been combined with the old Piz Daint system into the new Cray XC50 Piz Daint. The UK Met Office has three XC40s with a combined peak capability of 14 petaflops; it is the fastest machine in the world dedicated to weather and climate modeling, and was the 11th fastest on the TOP500 list when it was installed in June 2017. The United States Army Research Laboratory has an XC40 supercomputer called "Excalibur" with 100,064 processing cores. The Lawrence Berkeley National Laboratory has an XC40 supercomputer called "Cori" with 76,416 Intel Haswell cores and 658,784 Xeon Phi Knights Landing cores. Petroleum Geo-Services has an XC40 supercomputer used for the processing of complex seismic data sets.
Bowie State University has an XC40 supercomputer called "Sphinx" with 12,740 processing cores.
United States Naval Research Laboratory
The United States Naval Research Laboratory (NRL) is the corporate research laboratory for the United States Navy and the United States Marine Corps. It conducts applied research, technological development and prototyping; the laboratory's specialties include plasma physics, space physics, materials science and tactical electronic warfare. NRL is one of the first US Government scientific R&D laboratories, having opened in 1923 at the instigation of Thomas Edison, and is under the Office of Naval Research. NRL's research expenditures are about $1 billion per year. The Naval Research Laboratory conducts a variety of basic and applied scientific research and technological development of importance to the Navy, and has a history of scientific breakthroughs and technological achievements dating back to its foundation in 1923. In some instances the laboratory's contributions to military technology have been declassified only decades after those technologies were adopted. In 2011, NRL researchers published 1,398 unclassified scientific and technical articles, book chapters and conference proceedings.
In 2008, the NRL was ranked No. 3 among all U.S. institutions holding nanotechnology-related patents, behind IBM and the University of California. Current areas of research at NRL include advanced radio and infrared sensors, autonomous systems, computer science and artificial intelligence, directed energy technology, electronic and electro-optical device technology, electronic warfare, enhanced maintainability and survivability technology, environmental effects on naval systems, imaging research and systems, information technology, marine geosciences, materials, meteorology, ocean acoustics, oceanography, space systems and technology, surveillance and sensor technology, and undersea technology. In 2014, the NRL was researching: armor for munitions in transport, high-powered lasers, remote explosives detection, the dynamics of explosive gas mixtures, electromagnetic railgun technology, detection of hidden nuclear materials, graphene devices, high-power high-frequency amplifiers, acoustic lensing, information-rich orbital coastline mapping, arctic weather forecasting, global aerosol analysis and prediction, high-density plasmas, millisecond pulsars, broadband laser data links, virtual mission operation centers, battery technology, photonic crystals, carbon nanotube electronics, electronic sensors, mechanical nano-resonators, solid-state chemical sensors, organic opto-electronics, neural-electronic interfaces and self-assembling nanostructures.
The laboratory includes a range of R&D facilities; 2014 additions included the NRL Nanoscience Institute's 5,000 sq ft Class 100 nanofabrication cleanroom. The Naval Research Laboratory has a long history of spacecraft development, including the second and seventh American satellites in Earth orbit, the first solar-powered satellite, the first surveillance satellite, the first meteorological satellite and the first GPS satellite. Project Vanguard, the first American satellite program, tasked NRL with the design and launch of an artificial satellite, accomplished in 1958; as of 2013, Vanguard I and its upper launch stage are still in orbit, making them the longest-lived man-made satellites. Vanguard II was the first satellite to observe the Earth's cloud cover and therefore the first meteorological satellite. NRL's Galactic Radiation and Background I was the first U.S. intelligence satellite, mapping out Soviet radar networks from space. The Global Positioning System was tested by NRL's Timation series of satellites.
The first operational GPS satellite, Timation IV, was designed and constructed at NRL. NRL pioneered the study of the Sun's ultraviolet and X-ray spectrum and continues to contribute to the field with satellites like Coriolis, launched in 2003. NRL is responsible for the Tactical Satellite Program, with spacecraft launched in 2006, 2009 and 2011. The NRL designed the first satellite tracking system, which became the prototype for future satellite tracking networks. Prior to the success of surveillance satellites, the iconic parabolic antenna atop NRL's main headquarters in Washington, D.C. was part of Communication Moon Relay, a project that utilized signals bounced off the Moon both for long-distance communications research and for surveillance of internal Soviet transmissions during the Cold War. NRL's spacecraft development program continues today with the TacSat-4 experimental tactical reconnaissance and communication satellite. In addition to spacecraft design, NRL designs and operates spaceborne research instruments and experiments, such as the Strontium Iodide Radiation Instrumentation and the RAM Angle and Magnetic Field sensor aboard STPSat-5, the Wide-Field Imager for Solar Probe aboard the Parker Solar Probe, and the Large Angle and Spectrometric Coronagraph Experiment aboard the Solar and Heliospheric Observatory.
NASA's Fermi Gamma-ray Space Telescope was tested at NRL spacecraft testing facilities, and NRL scientists have contributed leading research to the study of novae and gamma-ray bursts. The Marine Meteorology Division, located in Monterey, contributes to weather forecasting in the United States and around the world by publishing imagery from 18 weather satellites. Satellite images of severe weather that are used for advanced warning originate from NRL–MRY, as seen in 2017 during Hurricane Harvey. NRL is involved in weather forecasting models such as the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS).