
By: Numascale     Published Date: Nov 20, 2013
Using commodity hardware and the “plug-and-play” NumaConnect interconnect, Numascale delivers true shared memory programming and simpler administration at standard HPC cluster price points. One such system currently offers users over 1,700 cores with a 4.6 TB single memory image.
Tags : 
     Numascale
By: Intel     Published Date: Aug 06, 2014
Designing a large-scale, high-performance data storage system presents significant challenges. This paper describes a step-by-step approach to designing such a system and presents an iterative methodology that applies at both the component level and the system level. A detailed case study is presented in which this methodology is used to design a Lustre storage system.
Tags : intel, high performance storage
     Intel
By: InsideHPC Special Report     Published Date: Aug 17, 2016
A single issue has always defined the history of HPC systems: performance. While offloading and co-design may seem like new approaches to computing, they actually have been used, to a lesser degree, in the past as a way to enhance performance. Current co-design methods are now going deeper into cluster components than was previously possible. These new capabilities extend from the local cluster nodes into the “computing network.”
Tags : 
     InsideHPC Special Report
By: Altair     Published Date: Feb 19, 2014
The Weizmann Institute of Science is one of the world’s leading multidisciplinary research institutions. Hundreds of scientists, laboratory technicians and research students embark daily on fascinating journeys into the unknown, seeking to improve our understanding of nature and our place within it. Groundbreaking medical and technological applications that have emerged from basic research conducted by Weizmann Institute scientists include: amniocentesis, a prenatal diagnostic test; sophisticated laser systems for high-precision diamond cutting; living polymerization, one of the most fundamental techniques of the modern polymer industry; and ribosome structure analysis, for which the Institute’s Professor Ada Yonath was awarded a Nobel Prize in Chemistry.
Tags : 
     Altair
By: Altair     Published Date: Jul 15, 2014
Impact analysis or drop testing is one of the most important stages of product design and development, and software that can simulate this testing accurately yields dramatic cost and time-to-market benefits for manufacturers. Dell, Intel and Altair have collaborated to analyze a virtual drop test solution with integrated simulation and optimization analysis, delivering proven gains in speed and accuracy. With this solution, engineers can explore more design alternatives for improved product robustness and reliability. As a result, manufacturers can significantly reduce the time to develop high-performing designs, improving product quality while minimizing time to delivery.
Tags : 
     Altair
By: Altair     Published Date: Jul 15, 2014
With Cray and Altair, engineers have the computational systems they need to perform advanced subsea computational fluid dynamics (CFD) analysis with better speed, scalability and accuracy. With Altair’s AcuSolve CFD solver running on Cray® XC30™ supercomputer systems, operators and engineers responsible for riser system design and analysis can increase component life, reduce uncertainty and improve the overall safety of their ultra-deep-water systems while still meeting their demanding development schedule.
Tags : 
     Altair
By: IBM     Published Date: Jun 05, 2014
Whether in high-performance computing, Big Data or analytics, information technology has become an essential tool in today’s hyper-competitive business landscape. Organizations are increasingly being challenged to do more with less and this is fundamentally impacting the way that IT infrastructure is deployed and managed. In this short e-book, learn the top ten ways that IBM Platform Computing customers are using technologies like IBM Platform LSF and IBM Platform Symphony to help obtain results faster, share resources more efficiently, and improve the overall cost-effectiveness of their global IT infrastructure.
Tags : ibm
     IBM
By: IBM     Published Date: Jun 05, 2014
While many storage vendors have submitted OpenStack drivers for their equipment, IBM has gone much further than that to support the OpenStack community. Read this paper and learn how.
Tags : ibm
     IBM
By: IBM     Published Date: Aug 27, 2014
Do more with less and reduce a range of expenses with grid and cluster scheduling and management software.
Tags : ibm, platform computing, save money
     IBM
By: SGI     Published Date: Jun 08, 2016
With High Performance Computing (HPC) supercomputer systems that comprise tens, hundreds, or even thousands of computing cores, users are able to increase application performance and accelerate their workflows to realize dramatic productivity improvements. The performance potential often comes at the cost of complexity. By their very nature, supercomputers comprise a great number of components, both hardware and software, that must be installed, configured, tuned, and monitored to maintain maximum efficiency. In a recent report, IDC lists downtime and latency as two of the most important problems faced by data center managers.
Tags : 
     SGI
By: RYFT     Published Date: Apr 03, 2015
The new Ryft ONE platform is a scalable 1U device that addresses a major need in the fast-growing market for advanced analytics — avoiding I/O bottlenecks that can seriously impede analytics performance on today's hyperscale cluster systems. The Ryft ONE platform is designed for easy integration into existing cluster and other server environments, where it functions as a dedicated, high-performance analytics engine. IDC believes that the new Ryft ONE platform is well positioned to exploit the rapid growth we predict for the high-performance data analysis market.
Tags : ryft, ryft one platform, 1u device, advanced analytics, avoiding i/o bottlenecks, idc
     RYFT
By: IBM     Published Date: May 20, 2015
Whether in high-performance computing, Big Data or analytics, information technology has become an essential tool in today’s hyper-competitive business landscape. Organizations are increasingly being challenged to do more with less and this is fundamentally impacting the way that IT infrastructure is deployed and managed. In this short e-book, learn the top ten ways that IBM Platform Computing customers are using technologies like IBM Platform LSF and IBM Platform Symphony to help obtain results faster, share resources more efficiently, and improve the overall cost-effectiveness of their global IT infrastructure.
Tags : 
     IBM
By: IBM     Published Date: May 20, 2015
Assembling a clustered environment can be complex due to the many software components that are required to enable technical and high performance computing (HPC) applications to run effectively. This webcast will demonstrate how IBM Platform Computing products simplify cluster deployment, use and management, bringing high performance capabilities to both experienced and new HPC administrators and users. We will also present examples of companies utilizing Platform Computing software today, to improve performance and utilization of their HPC environment while reducing costs.
Tags : 
     IBM
By: SGI     Published Date: Jan 07, 2015
After a long dry spell triggered in part by the global downturn in the economy, manufacturing is enjoying an economic and technological resurgence. According to the Institute for Supply Management (ISM), American manufacturing continues to improve. The ISM recently reported that manufacturing activity expanded in September 2014 for the 16th consecutive month, and that the overall national economy grew for the 64th consecutive month. Part of this growth is being fueled by the adoption of computer-aided engineering (CAE) and analysis solutions powered by high performance computing (HPC), especially by the Tier One manufacturers. HPC is beginning to make some inroads into the ranks of small- to medium-sized manufacturers (SMMs), but the going is slow.
Tags : 
     SGI
By: SGI     Published Date: Feb 26, 2015
After a long dry spell triggered in part by the global downturn in the economy, manufacturing is enjoying an economic and technological resurgence. According to the Institute for Supply Management (ISM), American manufacturing continues to improve. The ISM recently reported that manufacturing activity expanded in September 2014 for the 16th consecutive month, and that the overall national economy grew for the 64th consecutive month. Part of this growth is being fueled by the adoption of computer-aided engineering (CAE) and analysis solutions powered by high performance computing (HPC), especially by the Tier One manufacturers. HPC is beginning to make some inroads into the ranks of small- to medium-sized manufacturers (SMMs), but the going is slow.
Tags : 
     SGI
By: IBM     Published Date: Nov 14, 2014
A high performance computing (HPC) cluster refers to a group of servers built from off-the-shelf components that are connected via certain interconnect technologies. A cluster can deliver aggregated computing power from its many processors with many cores — sometimes hundreds, even thousands — to meet the processing demands of more complex engineering software, and therefore deliver results faster than individual workstations. If your company is in the majority that could benefit from access to more computing power, a cluster comprised of commodity servers may be a viable solution to consider, especially now that they’re easier to purchase, deploy, configure and maintain than ever before. Read more and learn about the '5 Easy Steps to a High Performance Cluster'.
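As a rough illustration of the idea in this abstract (not drawn from the IBM paper itself), the minimal MPI program below shows how an application typically sees a cluster's aggregated cores: the same binary runs as many cooperating processes (ranks) spread across the nodes, and each rank takes a share of the work. The program and node names here are hypothetical.

    /* Minimal MPI sketch (illustrative only). Each rank reports its id,
     * the total number of ranks in the job, and the cluster node it
     * landed on; a real engineering code would partition its workload
     * across ranks in the same way. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id within the job */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total processes across all nodes */

        char node[MPI_MAX_PROCESSOR_NAME];
        int len;
        MPI_Get_processor_name(node, &len);    /* host this rank is running on */

        printf("rank %d of %d running on %s\n", rank, size, node);

        MPI_Finalize();
        return 0;
    }

Built with mpicc and launched with, for example, mpirun -np 512 ./hello, the same executable spreads 512 ranks across the cluster's nodes, which is the aggregation of commodity servers and cores the abstract describes.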
Tags : 
     IBM
By: IBM     Published Date: Feb 13, 2015
The new Clusters for Dummies e-book from IBM Platform Computing explains how clustering technology enables you to run higher-quality simulations and shorten the time to discoveries. In this e-book, you’ll discover how to: make a cluster work for your business; create clusters using commodity components; use workload management software for reliable results; use cluster management software for simplified administration; and learn from case studies of clusters in action. With clustering technology you can increase your compute capacity, accelerate the innovation process, shrink time to insights, and improve your productivity, all of which will lead to increased competitiveness for your business.
Tags : 
     IBM
By: Dell and Intel®     Published Date: Nov 18, 2015
The NCSA Private Sector Program creates a high-performance computing cluster to help corporations overcome critical challenges. Through its Private Sector Program (PSP), NCSA has provided supercomputing, consulting, research, prototyping and development, and production services to more than one-third of the Fortune 50, in manufacturing, oil and gas, finance, retail/wholesale, bio/medical, life sciences, technology and other sectors. “We’re not the typical university supercomputer center, and PSP isn’t a typical group,” Giles says. “Our focus is on helping companies leverage high-performance computing in ways that make them more competitive.”
Tags : 
     Dell and Intel®
By: HP     Published Date: Oct 08, 2015
Administrators, engineers and executives are now tasked with solving some of the world’s most complex challenges. This could revolve around advanced computations for science, business, education, pharmaceuticals and beyond. Here’s the challenge: many data centers are reaching peak levels of resource consumption, and there’s more work to be done. So how are engineers and scientists supposed to continue working around such high-demand applications? How can they continue to create ground-breaking research while still utilizing optimized infrastructure? How can a platform scale to meet the new needs and demands of these users and applications? This is where HP Apollo Systems help reinvent the modern data center and accelerate your business.
Tags : apollo systems, reinventing hpc and the supercomputer, reinventing modern data center
     HP
By: Data Direct Networks     Published Date: Dec 31, 2015
When it comes to generating increasingly larger data sets and stretching the limits of high performance computing (HPC), the field of genomics and next generation sequencing (NGS) is in the forefront. The major impetus for this data explosion began in 1990 when the U.S. kicked off the Human Genome Project, an ambitious project designed to sequence the three billion base pairs that constitute the complete set of DNA in the human body. Eleven years and $3 billion later the deed was done. This breakthrough was followed by a massive upsurge in genomics research and development that included rapid advances in sequencing using the power of HPC. Today an individual’s genome can be sequenced overnight for less than $1,000.
Tags : 
     Data Direct Networks
By: IBM     Published Date: Sep 12, 2015
While many storage vendors have submitted OpenStack drivers for their equipment, IBM has gone much further than that to support the OpenStack community. Read this paper and learn how.
Tags : 
     IBM
By: Spectrum Enterprise     Published Date: Oct 10, 2018
Migrate your organization’s voice service to our cloud with Hosted Voice, a fully managed, comprehensive and customized solution with unified communication and collaboration features. Our solutions include full technical support and best-in-class, IP-based phones.
Tags : 
     Spectrum Enterprise
By: Hewlett Packard Enterprise     Published Date: Jan 31, 2019
This book helps you understand both sides of the hybrid IT equation and how HPE can help your organization transform its IT operations and save time and money in the process. I delve into the worlds of security, economics, and operations to show you new ways to support your business workloads.
Tags : 
     Hewlett Packard Enterprise
By: Hewlett Packard Enterprise     Published Date: Jan 31, 2019
Most companies moving into the public cloud today are making strategic decisions about which applications should go to the cloud and which should stay on-premises. Get acquainted with hybrid cloud management strategies and solutions, and learn what critical components must be addressed as you plan your hybrid cloud environment.
Tags : 
     Hewlett Packard Enterprise
By: Hewlett Packard Enterprise     Published Date: Jan 31, 2019
Powerful IT doesn’t have to be complicated. Hyperconvergence puts your entire virtualized infrastructure and advanced data services into one integrated powerhouse. Deploy HCI on an intelligent fabric that can scale with your business and you can hyperconverge the entire IT stack. This guide will help you: Understand the basic tenets of hyperconvergence and the software-defined data center; Solve for common virtualization roadblocks; Identify 3 things modern organizations want from IT; Apply 7 hyperconverged tactics to your existing infrastructure now.
Tags : 
     Hewlett Packard Enterprise

Add White Papers

Get your white papers featured in the insideHPC White Paper Library. Contact: Kevin@insideHPC.com