HPC

Results 1 - 25 of 87
By: Dell EMC     Published Date: Nov 26, 2018
Dell EMC technology for digital manufacturing harnesses workstation, HPC and storage capabilities that lead to better products and more efficient design and production processes, and that adapt to rapidly changing customer preferences. Collecting, collating and processing ever more data across the entire ecosystem, from product modelling to after-sales trends, is making the digital factory an important and necessary reality in the manufacturing landscape. Learn more about Dell solutions featuring Intel® Xeon® processors
Tags : 
     Dell EMC
By: Dell EMC     Published Date: Nov 26, 2018
Dell EMC technology for digital manufacturing harnesses the combined capabilities of workstations, HPC and storage to deliver better products and more efficient design and production processes, and to meet rapidly evolving customer preferences. Collecting, collating and synthesizing an ever-growing amount of data across the entire ecosystem, from product modelling to after-sales trends, is making digital manufacturing a powerful and necessary reality in the manufacturing landscape. Learn more about Dell solutions with Intel® Xeon® processors
Tags : 
     Dell EMC
By: Dell EMC     Published Date: Nov 26, 2018
Dell EMC technology for Digital Manufacturing harnesses the workstation, HPC and storage capabilities that combine to enable better products, more efficient design and production processes, and a faster response to rapidly changing customer preferences. Collecting, collating and digesting more and more data across the entire ecosystem, from product modelling to after-sales trends, is making the digital factory a powerful and necessary reality in the manufacturing landscape. Learn more about Dell Precision® workstations featuring Intel® Xeon® processors
Tags : 
     Dell EMC
By: IBM APAC     Published Date: Oct 16, 2018
Modern AI, HPC and analytics workloads are driving an ever-growing set of data-intensive challenges that can only be met with accelerated infrastructure. Designed for the AI era and architected for modern analytics and AI workloads, the Power Systems AC922 delivers unprecedented speed: up to 5.6 times as much bandwidth, resulting from the only architecture enabling NVLink between CPUs and GPUs and from a variety of next-generation I/O architectures (PCIe Gen4, CAPI 2.0, OpenCAPI and NVLink), plus proven, simplified deep-learning deployment and AI performance.
Tags : 
     IBM APAC
By: Dell EMC     Published Date: Aug 17, 2018
Dell EMC technology for Digital Manufacturing harnesses the workstation, HPC and storage capabilities that combine to enable better products, more efficient design and production processes, and a faster response to rapidly changing customer preferences. Collecting, collating and digesting more and more data across the entire ecosystem, from product modelling to after-sales trends, is making the digital factory a powerful and necessary reality in the manufacturing landscape.
Tags : 
     Dell EMC
By: Lenovo and Intel®     Published Date: Aug 02, 2018
Having the right, scalable IT systems to handle large compute workloads while tapping into massive data sets is critical to a project's success, and even to the success of your business. HPC solutions, powered by Intel® technology, offer greater value potential by combining the density of blade computing with the economics of rack-based systems. Learn how businesses like yours are using HPC technology to expand IT infrastructure, improve development, accelerate overall innovation, and stay on budget. Get the white paper.
Tags : lenovo, intel, systems, technology
     Lenovo and Intel®
By: Lenovo and Intel®     Published Date: Aug 02, 2018
If you thought HPC solutions were only useful for research or academia, think again! Lenovo HPC solutions, powered by Intel® technology, can be built and optimized specifically for your business needs. They can help accelerate innovation, whether that means precisely modelling a new drug, driving simulations to improve manufacturing, improving the efficiency and success rate of explorations, achieving greater manufacturing efficiency, or gaining new insights into IoT data. This best-practice guide will help you evaluate and choose the best approach to adopting HPC for your business needs, as well as the solution components to consider in its implementation. Get the eBook.
Tags : lenovo, hpc, intel, business, technology
     Lenovo and Intel®
By: IBM     Published Date: Nov 07, 2016
Is your software-defined infrastructure (SDI) for high performance computing (HPC) and big data analytics meeting the needs of your growing business? Would you like to know how to justify the cost of switching from unsupported open source software to a commercial-grade SDI that ensures your resources are used more effectively, cutting down time to market? This webcast gives you an overview of the true costs of building out and managing an HPC or big data environment, and shows how commercial-grade SDI software from IBM can provide a significant return on investment.
Tags : ibm, platform computing, software defined infrastructure, enterprise applications
     IBM
By: InsideHPC Special Report     Published Date: Aug 17, 2016
A single issue has always defined the history of HPC systems: performance. While offloading and co-design may seem like new approaches to computing, they actually have been used, to a lesser degree, in the past as a way to enhance performance. Current co-design methods are now going deeper into cluster components than was previously possible. These new capabilities extend from the local cluster nodes into the “computing network.”
Tags : 
     InsideHPC Special Report
By: HPE     Published Date: Jul 21, 2016
Deep learning is a method of creating artificial intelligence systems that combine computer-based multi-layer neural networks with intensive training techniques and large data sets to enable analysis and predictive decision making. A fundamental aspect of deep learning environments is that they transcend finite programmable constraints to the realm of extensible and trainable systems. Recent developments in technology and algorithms have enabled deep learning systems not only to equal but to exceed human capabilities in the pace of processing vast amounts of information. (A minimal code sketch of the multi-layer network idea follows this entry.)
Tags : 
     HPE
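
The abstract above describes deep learning as multi-layer neural networks plus intensive training on large data sets. As a purely illustrative sketch of that idea, and not HPE's implementation, the following NumPy example trains a tiny two-layer network on synthetic data with plain gradient descent; the data set, layer sizes, and learning rate are arbitrary choices for the example.

```python
# Minimal sketch: a multi-layer neural network trained on a synthetic data set.
# Illustrative only; real deep learning systems use far larger networks and data.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary-classification data (hypothetical stand-in for a "large data set").
X = rng.normal(size=(512, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float).reshape(-1, 1)

# Two-layer network: 20 inputs -> 16 hidden units (ReLU) -> 1 output (sigmoid).
W1 = rng.normal(scale=0.1, size=(20, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 1));  b2 = np.zeros(1)

lr = 0.1
for epoch in range(200):
    # Forward pass.
    h = np.maximum(X @ W1 + b1, 0.0)            # ReLU hidden layer
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid output

    # Backward pass for binary cross-entropy loss.
    dz2 = (p - y) / len(X)
    dW2 = h.T @ dz2; db2 = dz2.sum(axis=0)
    dz1 = (dz2 @ W2.T) * (h > 0)
    dW1 = X.T @ dz1; db1 = dz1.sum(axis=0)

    # Gradient-descent update ("intensive training" reduced to its simplest form).
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
print(f"final training loss: {loss:.4f}")
```
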
By: IBM     Published Date: Jul 19, 2016
Data movement and management is a major pain point for organizations operating HPC environments. Whether you are deploying a single cluster, or managing a diverse research facility, you should be taking a data centric approach. As data volumes grow and the cost of compute drops, managing data consumes more of the HPC budget and computational time. The need for Data Centric HPC architectures grows dramatically as research teams pool their resources to purchase more resources and improve overall utilization. Learn more in this white paper about the key considerations when expanding from traditional compute-centric to data-centric HPC.
Tags : ibm, analytics, hpc, big data, software development, enterprise applications, data management
     IBM
By: IBM     Published Date: Jul 19, 2016
The IBM Platform LSF family provides a complete set of workload-management capabilities for demanding, distributed HPC environments. In this video, we will learn how a genomics workflow can be managed in a multi-architecture, hybrid-cloud environment with the IBM Platform LSF family. Featuring IBM Platform Application Center and IBM Process Manager, the video shows how these add-on products can help drive productivity through easy-to-use interfaces for managing complex computational workflows. (A short job-submission sketch follows this entry.)
Tags : ibm, analytics, ibm platform, lsf, ibm platform application center, software development, enterprise applications, data management
     IBM
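
The entry above concerns managing workflow steps with IBM Platform LSF. As a hedged sketch of the general pattern, rather than the workflow shown in IBM's video, the snippet below submits two dependent steps of a hypothetical genomics pipeline through LSF's standard `bsub` command from Python; the queue name, script names, and sample ID are placeholders, while `-J`, `-q`, `-n`, `-o`, and `-w "done(...)"` are standard LSF options.

```python
# Hedged sketch of chaining genomics workflow steps with LSF job dependencies.
# The scripts, queue, and sample ID are hypothetical placeholders.
import subprocess

SAMPLE = "sample001"   # hypothetical sample identifier
QUEUE = "normal"       # hypothetical queue name; site-specific

def bsub(job_name, cmd, slots=1, depends_on=None):
    """Submit one workflow step to LSF, optionally waiting on a prior job."""
    args = ["bsub", "-J", job_name, "-q", QUEUE,
            "-n", str(slots), "-o", f"{job_name}.%J.out"]
    if depends_on:
        args += ["-w", f"done({depends_on})"]   # run only after the named job succeeds
    subprocess.run(args + [cmd], check=True)

# Step 1: read alignment (placeholder script name).
bsub(f"align_{SAMPLE}", f"./align_reads.sh {SAMPLE}", slots=16)

# Step 2: variant calling, started only once alignment completes successfully.
bsub(f"call_{SAMPLE}", f"./call_variants.sh {SAMPLE}", slots=8,
     depends_on=f"align_{SAMPLE}")
```
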
By: CyrusOne     Published Date: Jul 05, 2016
Many companies, especially those in the oil and gas industry, need high-density deployments of high-performance computing (HPC) environments to handle the extreme computational demands of seismic processing. CyrusOne's Houston West campus has the largest known concentration of HPC and high-density data center space in the colocation market today. The data center buildings at this campus are collectively known as the largest data center campus for seismic exploration computing in the oil and gas industry. By continuing to apply its Massively Modular design-and-build approach and high-density compute expertise, CyrusOne serves the growing number of oil and gas customers, as well as other customers, who demand best-in-class, mission-critical HPC infrastructure. The proven flexibility and scale of the company's HPC offering enable customers to deploy the ultra-high-density compute infrastructure they need to be competitive in their respective business sectors.
Tags : environment, cyrusone, best practices, productivity, flexibility
     CyrusOne
By: Avere Systems     Published Date: Jun 27, 2016
This white paper reviews common HPC-environment challenges and outlines solutions that can help IT professionals deliver best-in-class HPC cloud solutions—without undue stress and organizational chaos. The paper:
• Identifies current issues—including data management, data center limitations, user expectations, and technology shifts—that stress IT teams and existing infrastructure across industries and HPC applications
• Describes the potential cost savings, operational scale, and new functionality that cloud solutions can bring to big compute
• Characterizes technical and other barriers to an all-cloud infrastructure and describes how IT teams can leverage a hybrid cloud for compute power, maximum flexibility, and protection against lock-in scenarios
Tags : 
     Avere Systems
By: SGI     Published Date: Jun 08, 2016
With High Performance Computing (HPC) supercomputer systems that comprise tens, hundreds, or even thousands of computing cores, users are able to increase application performance and accelerate their workflows to realize dramatic productivity improvements. The performance potential often comes at the cost of complexity. By their very nature, supercomputers comprise a great number of components, both hardware and software, that must be installed, configured, tuned, and monitored to maintain maximum efficiency. In a recent report, IDC lists downtime and latency as two of the most important problems faced by data center managers.
Tags : 
     SGI
By: TYAN     Published Date: Jun 06, 2016
Organizations that implement high-performance computing (HPC) technologies have a wide range of requirements. From small manufacturing suppliers to national research institutions, using significant computing technologies is critical to creating innovative products and leading-edge research. No two HPC installations are the same. For maximum return, budget, software requirements, performance and customization all must be considered before installing and operating a successful environment.
Tags : 
     TYAN
By: Intel     Published Date: Apr 01, 2016
Since its beginnings in 1999 as a project at Carnegie Mellon University, Lustre, the high-performance parallel file system, has come a long, long way. Designed for, and always focused on, performance and scalability, it is now part of nearly every High Performance Computing (HPC) cluster on Top500.org's list of the fastest computers in the world, present in 70 percent of the top 100 and nine of the top ten. That's an achievement for any developer, or community of developers in the case of Lustre, to be proud of. (A brief striping sketch follows this entry.)
Tags : 
     Intel
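
Lustre's scalability comes largely from striping files across many object storage targets (OSTs). As a minimal, hedged illustration that is not part of the Intel white paper, the snippet below drives Lustre's standard `lfs setstripe` and `lfs getstripe` utilities from Python to set and inspect a striping layout; the mount point, stripe count, and stripe size are hypothetical values.

```python
# Hedged sketch: set a Lustre striping layout so large files are spread across
# multiple OSTs for parallel bandwidth. `lfs setstripe`/`lfs getstripe` and the
# -c (stripe count) / -S (stripe size) flags are standard Lustre tools; the
# directory and sizes below are hypothetical.
import subprocess

SCRATCH_DIR = "/lustre/scratch/myproject"   # hypothetical Lustre directory

# Stripe new files in this directory across 8 OSTs in 4 MiB chunks.
subprocess.run(["lfs", "setstripe", "-c", "8", "-S", "4M", SCRATCH_DIR],
               check=True)

# Verify the layout that newly created files will inherit.
subprocess.run(["lfs", "getstripe", SCRATCH_DIR], check=True)
```
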
By: SGI     Published Date: Mar 25, 2016
This paper offers those considering HPC, both users and managers, guidance on the best way to deploy an HPC solution. Three important questions are suggested that help determine the most appropriate HPC design (scale-up or scale-out) for meeting your goals and accelerating your discoveries.
Tags : 
     SGI
By: Seagate     Published Date: Jan 26, 2016
Finding oil and gas has always been a tricky proposition, given that reserves are primarily hidden underground, and as often as not, under the ocean as well. The costs involved in acquiring rights to a site, drilling the wells, and operating them are considerable, and have driven the industry to adopt advanced technologies for locating the most promising sites. As a consequence, oil and gas exploration today is essentially an exercise in scientific visualization and modeling, employing some of the most advanced computational technologies available. High performance computing (HPC) systems are being used to fill these needs, primarily with x86-based cluster computers and Lustre storage systems. The technology is well developed, but the scale of the problem demands medium to large-sized systems, requiring a significant capital outlay and operating expense. The most powerful systems deployed by oil and gas companies are represented by petaflop-scale computers with multiple petabytes of attached storage.
Tags : 
     Seagate
By: Data Direct Networks     Published Date: Dec 31, 2015
When it comes to generating increasingly larger data sets and stretching the limits of high performance computing (HPC), the field of genomics and next generation sequencing (NGS) is in the forefront. The major impetus for this data explosion began in 1990 when the U.S. kicked off the Human Genome Project, an ambitious project designed to sequence the three billion base pairs that constitute the complete set of DNA in the human body. Eleven years and $3 billion later the deed was done. This breakthrough was followed by a massive upsurge in genomics research and development that included rapid advances in sequencing using the power of HPC. Today an individual's genome can be sequenced overnight for less than $1,000.
Tags : 
     Data Direct Networks
By: Dell and Intel®     Published Date: Nov 18, 2015
Unleash the extreme performance and scalability of the Lustre® parallel file system for high performance computing (HPC) workloads, including the technical ‘big data’ applications common within today’s enterprises. The Dell Storage for HPC with Intel® Enterprise Edition (EE) for Lustre Solution allows end users who need the benefits of large-scale, high-bandwidth storage to tap the power and scalability of Lustre, with simplified installation, configuration, and management features that are backed by Dell and Intel®.
Tags : 
     Dell and Intel®
By: SGI     Published Date: Nov 17, 2015
In the pantheon of HPC grand challenges, weather forecasting and long-term climate simulation rank right up there with the most complex and computationally demanding problems in astrophysics, aeronautics, fusion power, exotic materials, and earthquake prediction, to name just a few. Modern weather prediction requires cooperation in the collection of observed data and the sharing of forecast output among all nations, a collaboration that has been ongoing for decades. This data is used to simulate effects on a range of scales: from events such as the path of a tornado, which change from minute to minute and move over distances measured in meters, to the turnover of water layers in the ocean, a process that is measured in decades or even hundreds of years and spans thousands of miles. The amount of data collected is staggering. Hundreds of thousands of surface stations, along with airborne radiosondes, ships and buoys, aircraft, and dozens of weather satellites, are streaming terabytes of information.
Tags : 
     SGI
By: Dell EMC     Published Date: Nov 09, 2015
Download this white paper and learn how the Dell Hybrid HPC solution delivers a hybrid CPU and GPU compute environment with the PowerEdge C6320 and C4130 to:
- Optimize workloads across CPU/GPU servers
- Deliver the highest density and performance in a small footprint
- Provide significant power, cooling and resource utilization benefits
- Lower cost of ownership and enhance reliability through the integrated Dell Remote Access Controller (iDRAC) and Lifecycle Controller
Tags : 
     Dell EMC
