
By: Seagate     Published Date: Jan 27, 2015
This paper is the first to explore a recent breakthrough: the High Performance Computing (HPC) industry's first Intelligence Community Directive (ICD) 503 (DCID 6/3 PL4) certified compliant and secure scale-out parallel file system solution, the Seagate ClusterStor™ Secure Data Appliance. The appliance is designed to address the government and business enterprise need for collaborative and secure information sharing within a Multi-Level Security (MLS) framework at Big Data and HPC scale.
Tags : 
     Seagate
By: Seagate     Published Date: Jan 26, 2016
Finding oil and gas has always been a tricky proposition, given that reserves are primarily hidden underground and, as often as not, under the ocean as well. The costs involved in acquiring rights to a site, drilling the wells, and operating them are considerable and have driven the industry to adopt advanced technologies for locating the most promising sites. As a consequence, oil and gas exploration today is essentially an exercise in scientific visualization and modeling, employing some of the most advanced computational technologies available. High performance computing (HPC) systems are being used to fill these needs, primarily with x86-based cluster computers and Lustre storage systems. The technology is well developed, but the scale of the problem demands medium to large-sized systems, requiring a significant capital outlay and operating expense. The most powerful systems deployed by oil and gas companies are petaflop-scale computers with multiple petabytes of attached storage.
Tags : 
     Seagate
By: Numascale     Published Date: Nov 20, 2013
Using commodity hardware and the "plug-and-play" NumaConnect interconnect, Numascale delivers true shared-memory programming and simpler administration at standard HPC cluster price points. One such system currently offers users over 1,700 cores with a 4.6 TB single memory image. (A minimal sketch of this shared-memory programming style follows below.)
Tags : 
     Numascale
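A minimal sketch of the shared-memory style such a single-memory-image system enables, assuming nothing Numascale-specific: plain C with OpenMP, where every thread addresses one flat allocation (the array size here is illustrative, far below the 4.6 TB image mentioned above):

```c
/* Shared-memory programming sketch: one flat allocation, many threads,
 * no message passing or explicit data distribution. Compile with an
 * OpenMP-enabled compiler (e.g. cc -fopenmp). */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void)
{
    size_t n = 1UL << 27;                 /* ~1 GiB of doubles; a multi-TB
                                             single image admits far larger */
    double *a = malloc(n * sizeof *a);
    if (!a) { perror("malloc"); return 1; }

    double sum = 0.0;
    /* All threads see the same address space, so the loop is simply
       divided among them. */
    #pragma omp parallel for reduction(+:sum)
    for (size_t i = 0; i < n; i++) {
        a[i] = (double)i;
        sum += a[i];
    }
    printf("threads=%d sum=%g\n", omp_get_max_threads(), sum);
    free(a);
    return 0;
}
```

On a single-memory-image machine the same code simply scales to more cores and memory; no MPI-style data decomposition is required.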
By: Intel     Published Date: Aug 06, 2014
Purpose-built for use with the dynamic computing resources available from Amazon Web Services™, the Intel Lustre* solution provides the fast, massively scalable storage software needed to accelerate performance, even on complex workloads. Intel is a driving force behind the development of Lustre, and is committed to providing fast, scalable, and cost-effective storage with added support and manageability. Intel® Enterprise Edition for Lustre* software provides the foundation for dynamic AWS-based workloads. Now you can innovate on your problem, not your infrastructure.
Tags : intel, cloud edition lustre software, scalable storage software
     Intel
By: Intel     Published Date: Aug 06, 2014
Powering Big Data Workloads with Intel® Enterprise Edition for Lustre* software

The Intel® portfolio for high-performance computing provides the following technology solutions:

• Compute - The Intel® Xeon processor E7 family provides a leap forward for every discipline that depends on HPC, with industry-leading performance and improved performance per watt. Add Intel® Xeon Phi coprocessors to your clusters and workstations to increase performance for highly parallel applications and code segments. Each coprocessor can add over a teraflop of performance and is compatible with software written for the Intel® Xeon processor E7 family; you don't need to rewrite code or master new development tools (a brief sketch of this approach follows the entry below).

• Storage - High-performance, highly scalable storage solutions with Intel® Enterprise Edition for Lustre* software and Intel® Xeon processor E7-based storage systems for centralized storage, plus reliable and responsive local storage with Intel® Solid State Drives.

• Networking - Intel® True Scale Fabric and Networking technologies, built for HPC to deliver fast message rates and low latency.

• Software and Tools - A broad range of software and tools to optimize and parallelize your software and clusters.
Tags : 
     Intel
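As a hedged illustration of the "no rewrite" point above: a highly parallel code segment written once in standard C with OpenMP can be compiled for Xeon processors and, with the appropriate compiler options (an assumption about the toolchain, not taken from the paper), for Xeon Phi coprocessors. The kernel below is generic, not Intel sample code:

```c
/* Generic SAXPY-style kernel: written once, portable across Xeon and
 * (given a suitable compiler) Xeon Phi. Build with OpenMP enabled;
 * exact flags are a toolchain assumption. */
#include <stdio.h>

#define N 10000000

static float x[N], y[N];

int main(void)
{
    const float a = 2.5f;
    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* Parallel across cores, vectorized within each core. */
    #pragma omp parallel for simd
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("y[0]=%f y[N-1]=%f\n", y[0], y[N - 1]);
    return 0;
}
```

The point is that the same standard source expresses the parallelism; targeting a coprocessor is a build-time decision rather than a rewrite.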
By: Altair     Published Date: Feb 19, 2014
The Weizmann Institute of Science is one of the world’s leading multidisciplinary research institutions. Hundreds of scientists, laboratory technicians and research students embark daily on fascinating journeys into the unknown, seeking to improve our understanding of nature and our place within it. Groundbreaking medical and technological applications that have emerged from basic research conducted by Weizmann Institute scientists include: amniocentesis, a prenatal diagnostic test; sophisticated laser systems for high-precision diamond cutting; living polymerization, one of the most fundamental techniques of the modern polymer industry; and ribosome structure analysis, for which the Institute’s Professor Ada Yonath was awarded a Nobel Prize in Chemistry.
Tags : 
     Altair
By: Altair     Published Date: Jul 15, 2014
Impact analysis or drop testing is one of the most important stages of product design and development, and software that can simulate this testing accurately yields dramatic cost and time-to-market benefits for manufacturers. Dell, Intel and Altair have collaborated to analyze a virtual drop test solution with integrated simulation and optimization analysis, delivering proven gains in speed and accuracy. With this solution, engineers can explore more design alternatives for improved product robustness and reliability. As a result, manufacturers can significantly reduce the time to develop high-performing designs, improving product quality while minimizing time to delivery.
Tags : 
     Altair
By: Altair     Published Date: Jul 15, 2014
With Cray and Altair, engineers have the computational systems they need to perform advanced subsea computational fluid dynamics (CFD) analysis with better speed, scalability and accuracy. With Altair’s AcuSolve CFD solver running on Cray® XC30™ supercomputer systems, operators and engineers responsible for riser system design and analysis can increase component life, reduce uncertainty and improve the overall safety of their ultra-deep-water systems while still meeting their demanding development schedule.
Tags : 
     Altair
By: Bright Computing     Published Date: May 05, 2014
A successful HPC cluster is a powerful asset for an organization. The following essential strategies are guidelines for the effective operation of an HPC cluster resource:

1. Plan To Manage the Cost of Software Complexity
2. Plan for Scalable Growth
3. Plan to Manage Heterogeneous Hardware/Software Solutions
4. Be Ready for the Cloud
5. Have an Answer for the Hadoop Question

Bright Cluster Manager addresses the above strategies remarkably well and allows HPC and Hadoop clusters to be easily created, monitored, and maintained using a single comprehensive user interface. Administrators can focus on more sophisticated, value-adding tasks rather than developing homegrown solutions that may cause problems as clusters grow and change. The end result is an efficient and successful HPC cluster that maximizes user productivity.
Tags : bright computing, hpc clusters
     Bright Computing
By: IBM     Published Date: Jun 05, 2014
IBM NeXtScale System is changing system design dynamics in the HPC marketplace with ultra-dense servers. Enterprises looking for raw computing power and throughput to handle technical computing, cloud, grid, and analytics workloads should evaluate this new ultra-dense, highly scalable system design. Read the paper by Clabby Analytics to learn more.
Tags : 
     IBM
By: IBM     Published Date: Jun 05, 2014
IBM Platform Symphony is a high performance SOA grid server that optimizes application performance and resource sharing. Platform Symphony runs distributed application services on a scalable, shared, heterogeneous grid and accelerates a wide variety of parallel applications, quickly computing results while making optimal use of available infrastructure. Platform Symphony Developer Edition enables developers to rapidly develop and test applications without the need for a production grid. After applications are running in the Developer Edition, they are guaranteed to run at scale once published to a scaled-out Platform Symphony grid. Platform Symphony Developer Edition also enables developers to easily test and verify Hadoop MapReduce applications against IBM Platform Symphony. By leveraging IBM Platform Symphony's proven, low-latency grid computing solution, more MapReduce jobs can run faster, frequently with less infrastructure. (The underlying scatter-compute-gather pattern is sketched below.)
Tags : ibm
     IBM
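Platform Symphony has its own service-oriented SDK, which is not reproduced here. As a sketch of the generic scatter-compute-gather pattern that such a grid server schedules, the example below uses plain MPI; compute_task is a hypothetical stand-in for a service invocation:

```c
/* Generic grid pattern in MPI (illustration only; not the Symphony API):
 * the root scatters independent tasks, every rank computes in parallel,
 * and results are gathered back for aggregation. */
#include <stdio.h>
#include <mpi.h>

static double compute_task(double input)   /* hypothetical service stand-in */
{
    return input * input;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double task, result;
    double tasks[size], results[size];      /* C99 VLAs, one slot per rank */

    if (rank == 0)                          /* root prepares the task list */
        for (int i = 0; i < size; i++) tasks[i] = (double)i;

    MPI_Scatter(tasks, 1, MPI_DOUBLE, &task, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    result = compute_task(task);            /* parallel compute step */
    MPI_Gather(&result, 1, MPI_DOUBLE, results, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    if (rank == 0)
        for (int i = 0; i < size; i++) printf("task %d -> %g\n", i, results[i]);

    MPI_Finalize();
    return 0;
}
```

A production grid adds what the sketch omits: dynamic scheduling, fault handling, heterogeneous resource sharing, and low-latency task dispatch, which is where a product like Symphony earns its keep.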
By: IBM     Published Date: Sep 02, 2014
Learn how GPFS accelerates data-intensive workflows and lowers storage costs in Life Sciences, Energy Exploration, Government, Media & Entertainment and Financial Services by removing data-related bottlenecks, simplifying data management at scale, empowering global collaboration, managing the full data life cycle cost-effectively, and ensuring end-to-end data availability, reliability, and integrity. (A minimal parallel I/O sketch follows below.)
Tags : ibm, complete storage solution, gpfs
     IBM
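GPFS presents a POSIX interface, so applications need no special API; one common way parallel codes drive such a file system at scale is MPI-IO collective I/O. The sketch below is generic rather than GPFS-specific, the file path is hypothetical, and it shows each rank writing its own slice of one shared file:

```c
/* Collective parallel write (generic MPI-IO, not GPFS-specific): every
 * rank writes CHUNK doubles at its own offset in a single shared file,
 * letting the MPI-IO layer aggregate requests before they reach the
 * file system. The path below is hypothetical. */
#include <stdio.h>
#include <mpi.h>

#define CHUNK 1024                          /* doubles per rank */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double buf[CHUNK];
    for (int i = 0; i < CHUNK; i++) buf[i] = rank + i / (double)CHUNK;

    MPI_File fh;
    int rc = MPI_File_open(MPI_COMM_WORLD, "/gpfs/scratch/demo.dat",
                           MPI_MODE_CREATE | MPI_MODE_WRONLY,
                           MPI_INFO_NULL, &fh);
    if (rc != MPI_SUCCESS) MPI_Abort(MPI_COMM_WORLD, 1);

    /* _all makes the write collective: all ranks participate together. */
    MPI_Offset off = (MPI_Offset)rank * CHUNK * sizeof(double);
    MPI_File_write_at_all(fh, off, buf, CHUNK, MPI_DOUBLE, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}
```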
By: IBM     Published Date: Sep 02, 2014
Advanced analytics strategies yield the greatest benefits in terms of improving patient and business outcomes when applied across the entire healthcare ecosystem. But the challenge of collaborating across organizational boundaries in order to share information and insights is daunting to many stakeholders.
Tags : ibm, ecosystem
     IBM
By: IBM     Published Date: Sep 02, 2014
This two year research initiative in collaboration with IBM focuses on key trends, best practices, challenges, and priorities in enterprise risk management and covers topics as diverse as culture, organizational structure, data, systems, and processes.
Tags : ibm, chartis, risk-enabled enterprise
     IBM
By: IBM     Published Date: Sep 16, 2015
A fast, simple, scalable and complete storage solution for today's data-intensive enterprise. IBM Spectrum Scale is used extensively across industries worldwide. Spectrum Scale simplifies data management with integrated tools designed to help organizations manage petabytes of data and billions of files, as well as control the cost of managing these ever-growing data volumes.
Tags : 
     IBM
By: IBM     Published Date: Sep 16, 2015
According to our global study of more than 800 cloud decision makers and users, organizations are becoming increasingly focused on the business value cloud provides. Cloud is integral to mobile, social, and analytics initiatives, and to the big data management challenge that often comes with them, and it helps power the entire suite of game-changing technologies. Enterprises can aim higher when these deployments ride on the cloud. Mobile, analytics, and social implementations can be bigger, bolder, and drive greater impact when backed by scalable infrastructure. In addition to scale, cloud can provide integration, gluing the individual technologies into more cohesive solutions. Learn how companies are gaining a competitive advantage with cloud computing.
Tags : 
     IBM
By: IBM     Published Date: Sep 16, 2015
Docker is a lightweight Linux container technology built on top of LXC (LinuX Containers) and cgroups (control groups), which offers many attractive benefits for HPC environments. Find out more about how IBM Platform LSF® and Docker have been integrated outside the core of Platform LSF, with a real-world example involving the application BWA (bio-bwa.sourceforge.net). This step-by-step white paper provides details on how to get started with the IBM Platform LSF and Docker integration, which is available via open beta on Service Management Connect.
Tags : 
     IBM
By: SGI     Published Date: Nov 17, 2015
In the pantheon of HPC grand challenges, weather forecasting and long-term climate simulation rank right up there with the most complex and computationally demanding problems in astrophysics, aeronautics, fusion power, exotic materials, and earthquake prediction, to name just a few. Modern weather prediction requires cooperation in the collection of observed data and sharing of forecast output among all nations, a collaboration that has been ongoing for decades. This data is used to simulate effects on a range of scales: from events such as the path of tornados, which change from minute to minute and move over distances measured in meters, to the turnover of water layers in the ocean, a process that is measured in decades or even hundreds of years and spans thousands of miles. The amount of data collected is staggering. Hundreds of thousands of surface stations, along with airborne radiosondes, ships and buoys, aircraft, and dozens of weather satellites, are streaming terabytes of information.
Tags : 
     SGI
By: RYFT     Published Date: Apr 03, 2015
The new Ryft ONE platform is a scalable 1U device that addresses a major need in the fast-growing market for advanced analytics — avoiding I/O bottlenecks that can seriously impede analytics performance on today's hyperscale cluster systems. The Ryft ONE platform is designed for easy integration into existing cluster and other server environments, where it functions as a dedicated, high-performance analytics engine. IDC believes that the new Ryft ONE platform is well positioned to exploit the rapid growth we predict for the high-performance data analysis market.
Tags : ryft, ryft one platform, 1u device, advanced analytics, avoiding i/o bottlenecks, idc
     RYFT
By: IBM     Published Date: May 20, 2015
The latest generation of highly scalable HPC clusters is a game changer for design optimization challenges. HPC clusters, built on a modular, multi-core x86 architecture, provide a cost-effective and accessible platform on which to conduct realistic simulation compared with the "big iron" HPC systems of the past or with the latest workstation models. This paper provides 6 steps to making clusters a reality for any business.
Tags : 
     IBM
By: Adaptive Computing     Published Date: Feb 21, 2014
Big data applications represent a fast-growing category of high-value applications that are increasingly employed by business and technical computing users. However, they have exposed an inconvenient dichotomy in the way resources are utilized in data centers. Conventional enterprise and web-based applications can be executed efficiently in virtualized server environments, where resource management and scheduling is generally confined to a single server. By contrast, data-intensive analytics and technical simulations demand large aggregated resources, necessitating intelligent scheduling and resource management that spans a computer cluster, cloud, or entire data center. Although these tools exist in isolation, they are not available in a general-purpose framework that allows them to interoperate easily and automatically within existing IT infrastructure.
Tags : 
     Adaptive Computing
By: Seagate     Published Date: Feb 02, 2015
An introduction to the inner workings of the world’s most scalable and popular open source HPC file system
Tags : 
     Seagate
By: Intel     Published Date: Apr 01, 2016
Since its beginnings in 1999 as a project at Carnegie Mellon University, Lustre, the high-performance parallel file system, has come a long, long way. Designed from the start with a focus on performance and scalability, it is now part of nearly every High Performance Computing (HPC) cluster on the Top500.org list of the fastest computers in the world, present in 70 percent of the top 100 and nine out of the top ten. That's an achievement for any developer, or community of developers in the case of Lustre, to be proud of.
Tags : 
     Intel
By: InsideHPC Special Report     Published Date: Sep 04, 2013
The recently launched Mellanox Connect-IB™ InfiniBand adapter introduced a novel high-performance and scalable architecture for high-performance clusters. The architecture was designed from the ground up to provide high performance and scalability for the largest supercomputers in the world, today and in the future. (A simple latency microbenchmark of the kind used to evaluate such interconnects is sketched below.)
Tags : 
     InsideHPC Special Report
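Interconnect claims about latency and message rate are typically checked with a ping-pong microbenchmark. The version below is a simplified, generic MPI sketch (real benchmarks sweep message sizes and run many more iterations), not Mellanox code:

```c
/* Minimal ping-pong latency probe: rank 0 and rank 1 bounce one byte
 * back and forth; the averaged half round-trip approximates one-way
 * latency. Run with at least two ranks, e.g. mpirun -np 2. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "run with at least 2 ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    const int iters = 1000;
    char byte = 0;
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    if (rank == 0)
        printf("approx. one-way latency: %.2f us\n",
               (MPI_Wtime() - t0) / (2.0 * iters) * 1e6);

    MPI_Finalize();
    return 0;
}
```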
By: IBM     Published Date: Nov 14, 2014
The latest generation of highly scalable HPC clusters is a game changer for design optimization challenges. HPC clusters, built on a modular, multi-core x86 architecture, provide a cost-effective and accessible platform on which to conduct realistic simulation compared with the "big iron" HPC systems of the past or with the latest workstation models. This paper provides 6 steps to making clusters a reality for any business.
Tags : 
     IBM

Add White Papers

Get your white papers featured in the insideHPC White Paper Library. Contact: Kevin@insideHPC.com