
By: Cray     Published Date: Jul 02, 2015
The Cray XC series is a distributed memory system developed as part of Cray’s participation in the Defense Advanced Research Projects Agency’s (DARPA) High Productivity Computing System (HPCS) program. Previously codenamed “Cascade,” the Cray XC system is capable of sustained multi-petaflops performance and features a hybrid architecture combining multiple processor technologies, a high performance network, and a high performance operating system and programming environment.
Tags : 
     Cray
By: InsideHPC Special Report     Published Date: Aug 17, 2016
A single issue has always defined the history of HPC systems: performance. While offloading and co-design may seem like new approaches to computing, they actually have been used, to a lesser degree, in the past as a way to enhance performance. Current co-design methods are now going deeper into cluster components than was previously possible. These new capabilities extend from the local cluster nodes into the “computing network.”
Tags : 
     InsideHPC Special Report
By: IBM     Published Date: Jun 05, 2014
High-performance computing (HPC) continues to transform the capabilities of organizations across a range of industries, from the Human Genome Project to aerodynamics testing for race cars. This paper demonstrates how IBM Platform Computing solutions offer effective ways to unleash the power of HPC.
Tags : ibm, hpc
     IBM
By: IBM     Published Date: Jun 05, 2014
In an audited benchmark conducted by STAC®, the Securities Technology Analysis Center, InfoSphere BigInsights for Hadoop was found to deliver an approximately 4x average performance gain over open-source Hadoop running jobs derived from production workload traces. The result is consistent with an approximately 11x advantage in raw scheduling performance provided by Adaptive MapReduce, a new InfoSphere BigInsights for Hadoop feature that leverages high-performance computing technology from IBM Platform Computing.
Tags : ibm
     IBM
By: IBM     Published Date: Sep 02, 2014
Comparing IBM performance against competitive offerings to help organizations deploy workloads with confidence. IBM Platform Computing Cloud Service lets users economically add computing capacity by accessing ready-to-use clusters in the cloud, delivering high performance that compares favorably with cloud offerings from other providers. Tests show that the IBM service delivers the best (or ties for the best) absolute performance in all test categories. Learn more.
Tags : ibm, platform computing cloud services, benchmarking performance
     IBM
By: IBM     Published Date: Sep 02, 2014
This brief webcast will cover the new and enhanced capabilities of Elastic Storage 4.1, including native encryption and secure erase, flash-accelerated performance, network performance monitoring, global data sharing, NFS data migration and more. IBM GPFS (Elastic Storage) may be the key to improving your organization's effectiveness and can help define a clear data management strategy for future data growth and support.
Tags : ibm, elastic storage
     IBM
By: IBM     Published Date: Sep 16, 2015
Are you trying to support more variable workloads than your environment can handle? Could you benefit from a high performance cluster, but lack the budget or resources to deploy and manage technical computing infrastructure? Are you running out of data center space but still need to grow your compute capacity? If you want answers to any of these questions, then please join us for an informative webinar describing the advantages and pitfalls of relocating a high performance workload to the cloud. View this webinar to learn:
- Why general purpose clouds are insufficient for technical computing, analytics and Hadoop workloads
- How high performance clouds can improve your profitability and give you a competitive edge
- How to ensure that your cloud environment is secure
- How to evaluate which applications are suitable for a hybrid or public cloud environment
- How to get started and choose a service provider
Tags : 
     IBM
By: SGI     Published Date: Mar 03, 2015
The SGI UV system is uniquely suited to bioinformatics and genomics, providing the computational capabilities and global shared memory architecture needed for even the most demanding sequencing and analytic tasks, including post-sequencing and other data-intensive workflows. Because of the system's outstanding speed and throughput, genomics researchers can perform very large jobs in less time, realizing a dramatically accelerated time-to-solution. Best of all, they can explore avenues of research that were computationally beyond the reach of HPC systems lacking the power and in-memory capabilities of the SGI UV.
Tags : 
     SGI
By: IBM     Published Date: May 20, 2015
Assembling a clustered environment can be complex due to the many software components that are required to enable technical and high performance computing (HPC) applications to run effectively. This webcast will demonstrate how IBM Platform Computing products simplify cluster deployment, use and management, bringing high performance capabilities to both experienced and new HPC administrators and users. We will also present examples of companies using Platform Computing software today to improve the performance and utilization of their HPC environments while reducing costs.
Tags : 
     IBM
By: Bull     Published Date: Dec 04, 2014
Bull, an Atos company, is a leader in Big Data, HPC and cyber-security with a worldwide market presence. Bull has extensive experience in implementing and running petaflops-scale supercomputers. The exascale program is a new step forward in Bull’s strategy to deliver exascale supercomputers capable of addressing the new challenges of science, industry and society.
Tags : bull, exascale, big data, hpc, cyber security, supercomputers
     Bull
By: Seagate     Published Date: Sep 30, 2015
Although high-performance computing (HPC) often stands apart from a typical IT infrastructure—it uses highly specialized scale-out compute, networking and storage resources—it shares with mainstream IT the ability to push data center capacity to the breaking point. Much of this is due to data center inefficiencies caused by HPC storage growth. The Seagate® ClusterStor™ approach to scale-out HPC storage can significantly improve data center efficiency. No other vendor solution offers the same advantages.
Tags : 
     Seagate
By: IBM     Published Date: Nov 14, 2014
IBM® has created a proprietary implementation of the open-source Hadoop MapReduce run-time that leverages the IBM Platform™ Symphony distributed computing middleware while maintaining application-level compatibility with Apache Hadoop.
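To make the programming model concrete, here is a minimal word-count sketch of the generic map/shuffle/reduce flow in plain Python. It is an illustration only, not IBM's Adaptive MapReduce or Platform Symphony code; all function names and the sample documents are assumptions made for the example.

```python
from collections import defaultdict

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in the input split.
    for word in document.split():
        yield word.lower(), 1

def shuffle(mapped_pairs):
    # Shuffle: group all emitted values by key, as the framework
    # would do between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in mapped_pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate the grouped values for each key.
    return {word: sum(counts) for word, counts in groups.items()}

if __name__ == "__main__":
    docs = ["the quick brown fox", "the lazy dog", "the fox"]
    mapped = (pair for doc in docs for pair in map_phase(doc))
    print(reduce_phase(shuffle(mapped)))  # e.g. {'the': 3, 'fox': 2, ...}
```

A production run-time distributes the map and reduce tasks across cluster nodes and handles scheduling and fault tolerance; the scheduling layer is precisely where the Platform Symphony middleware described above comes in.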
Tags : 
     IBM
By: IBM     Published Date: Feb 13, 2015
This major Hollywood studio wanted to reduce the compute time required to render animated films. Using an HPC solution powered by Platform LSF, the studio increased its compute capacity, enabling the release of two major feature films and multiple animated shorts.
Tags : 
     IBM
By: IBM     Published Date: Feb 13, 2015
The new Clusters for Dummies e-book from IBM Platform Computing explains how clustering technology enables you to run higher quality simulations and shorten the time to discoveries. In this e-book, you’ll discover how to:
- Make a cluster work for your business
- Create clusters using commodity components
- Use workload management software for reliable results
- Use cluster management software for simplified administration
- Learn from case studies of clusters in action
With clustering technology you can increase your compute capacity, accelerate your innovation process, shrink time to insights, and improve your productivity, all of which will lead to increased competitiveness for your business.
Tags : 
     IBM
By: Panasas     Published Date: Oct 02, 2014
HPC and technical computing environments require the collection, storage, and transmission of large-scale datasets. To meet these demands, data center architects must consider how increasing storage capacity over time will affect HPC workloads, performance, and system availability. While many enterprises have looked to scale-up NAS to meet their storage needs, this approach can lead to data islands that make it difficult to share data. Distributed, scale-out storage was developed to get around the technology limitations of scale-up NAS architectures.
Tags : 
     Panasas
By: HPE     Published Date: Jul 21, 2016
Deep learning is a method of creating artificial intelligence systems that combine computer-based multi-layer neural networks with intensive training techniques and large data sets to enable analysis and predictive decision making. A fundamental aspect of deep learning environments is that they transcend finite programmable constraints to the realm of extensible and trainable systems. Recent developments in technology and algorithms have enabled deep learning systems not only to equal but to exceed human capabilities in the pace of processing vast amounts of information.
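To illustrate the multi-layer idea in miniature, here is a sketch of a two-layer neural network trained by gradient descent on a toy XOR dataset, in plain Python/NumPy. It is a generic teaching example, not code from the HPE paper; the layer sizes, learning rate, iteration count and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a classic problem a single-layer network cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two-layer network: 2 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(scale=0.5, size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass through both layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the mean squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent update: "training" in miniature.
    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0)

print(out.round(3))  # approaches [[0], [1], [1], [0]]
```

Production deep learning systems of the kind the paper discusses differ mainly in scale: many more layers and parameters, specialized hardware, and far larger training sets, but the forward/backward/update loop above is the same core mechanism.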
Tags : 
     HPE
By: Bright Computing     Published Date: Nov 29, 2014
Delivering a cost-effective HPC solution to Cisco Unified Computing System (UCS) customers has become easier thanks to Bright Computing. Bright Cluster Manager is a powerful software solution for managing HPC clusters cost-effectively. Cisco UCS is a highly successful integrated server solution that offers a wide range of capabilities from a single vendor. Read more to learn how these products provide an HPC cluster solution that offers cost-saving features.
Tags : 
     Bright Computing
By: Cisco     Published Date: Sep 27, 2019
The StackWise-480 architecture allows stacking of up to eight switches in a ring topology to achieve 480G of stack bandwidth. The latest Cisco Catalyst 9300 Series Switches support StackWise-480. This technology is flexible, modular, and evolutionary, and it delivers Cisco IOS XE feature capabilities with hardware acceleration to every port in the stack.
Tags : 
     Cisco
By: RMS     Published Date: Jul 25, 2019
The insurance industry boasts some of the most sophisticated modeling capabilities in the world. And yet the average property underwriter does not have access to the kind of predictive tools that carriers use at a portfolio level to manage risk aggregation, streamline reinsurance buying and optimize capitalization.
Tags : 
     RMS
By: MediaPass     Published Date: Apr 02, 2013
Read this paper to understand 3 sudden industry shifts that affect your site’s ability to survive in a changing landscape.
Tags : 
     MediaPass
By: Shell U.K. Limited     Published Date: Aug 30, 2019
We harness Shell's gas-to-liquids (GTL) technology to create high-purity process oils that open up exciting opportunities for your products and operations. Conventional process oils are derived from crude oil, whereas Shell Risella X and Shell Ondina X are made from pure synthesis gas. That frees them from the impurities and wide molecular variation found in mineral oils. Using GTL process oils can improve your processes and end products to give you a competitive edge. Have you experienced product quality problems caused by batch-to-batch variation in process oils? Do you need low-viscosity process oils, but have concerns about the effects of volatility on working conditions? Could you offer better products if you had process oils with different characteristics, for example, a very narrow hydrocarbon distribution range? Our GTL process oils contain a high proportion of hydrocarbons…
Tags : shell, risella, ondina, process oils, gas-to-liquids, gtl, purity, pure
     Shell U.K. Limited
By: Dell EMEA     Published Date: Sep 09, 2019
When it comes to longevity, no one can match Dell. As well as providing the capacity, manageability and security features that IT departments choose, our computers are also designed for longer lifecycles, resulting in less waste. No wonder they have been successful in the market for so long. But enough about the past; let's talk about innovative new features. The Latitude 7400 2-in-1 uses Dell's new ExpressSign-in technology, which detects the user's presence, wakes the system in about a second and allows sign-in via facial recognition with Windows Hello. Users can simply sit down at their desk and start working, with no need for keyboard shortcuts to switch users or even to touch the power button. In fact, it is the world's first PC to use a proximity sensor with Intel® Context Sensing technology.
Tags : 
     Dell EMEA
By: Oracle     Published Date: Sep 25, 2019
Research shows that legacy ERP 1.0 systems were not designed for usability and insight. More than three quarters of business leaders say their current ERP system doesn’t meet their requirements, let alone their future plans¹. These systems lack the modern best-practice capabilities needed to compete and grow. To enable today’s data-driven organization, the very foundation from which you are operating needs to be re-established; it needs to be “modernized”. Oracle’s goal is to help you navigate your own journey to modernization by sharing the knowledge we’ve gained working with many thousands of customers using both legacy and modern ERP systems. To that end, we’ve crafted this handbook outlining the fundamental characteristics that define modern ERP.
Tags : 
     Oracle