By: Cray     Published Date: Jul 02, 2015
As global energy costs climb, Cray has taken its long-standing expertise in optimizing power and cooling and focused it on developing overall system energy efficiency. The resulting Cray XC supercomputer series integrates into modern datacenters and achieves high levels of efficiency while minimizing system and infrastructure costs.
Tags : 
     Cray
By: Green Revolution Cooling     Published Date: Feb 20, 2014
This paper examines the advantages of liquid submersion cooling and, in particular, takes a closer look at GreenDEF™, the dielectric mineral oil blend used by Green Revolution Cooling, a global leader in submersion cooling technologies. Further, the paper will address concerns of potential adopters of submersion systems and explain why these systems can actually improve performance in servers and protect expensive data center investments.
Tags : 
     Green Revolution Cooling
By: Green Revolution Cooling     Published Date: May 12, 2014
Download Green Revolution Cooling’s White Paper “Data Center Floor Space Utilization – Comparing Density in Liquid Submersion and Air Cooling Systems” to learn about the density of liquid submersion cooling, how looks can be deceiving, and how, more often than not, liquid cooling once again beats air cooling.
Tags : green revolution, data center floor space
     Green Revolution Cooling
By: IBM     Published Date: Jun 05, 2014
In an audited benchmark conducted by STAC®, the Securities Technology Analysis Center, InfoSphere BigInsights for Hadoop was found to deliver an approximately 4x average performance gain over open-source Hadoop running jobs derived from production workload traces. The result is consistent with an approximately 11x advantage in raw scheduling performance provided by Adaptive MapReduce – a new InfoSphere BigInsights for Hadoop feature that leverages high-performance computing technology from IBM Platform Computing.
Tags : ibm
     IBM
By: IBM     Published Date: Jun 05, 2014
In an audited benchmark conducted by STAC®, the Securities Technology Analysis Center, InfoSphere BigInsights for Hadoop was found to deliver an approximate 4x performance gain on average over open source Hadoop running jobs derived from production workload traces. The result is consistent with an approximate eleven times advantage in raw scheduling performance provided by Adaptive MapReduce – a new InfoSphere BigInsights for Hadoop feature that leverages high-performance computing technology from IBM Platform Computing.
Tags : ibm
     IBM
By: IBM     Published Date: Sep 16, 2015
Are you trying to support more variable workloads than your environment can handle? Can you benefit from a high-performance cluster, but do not have the budget or resources to deploy and manage technical computing infrastructure? Are you running out of data center space but still need to grow your compute capacity? If you want answers to any of these questions, then please join us for an informative webinar describing the advantages and pitfalls of relocating a high-performance workload to the cloud. View this webinar to learn:
- Why general-purpose clouds are insufficient for technical computing, analytics, and Hadoop workloads
- How high-performance clouds can improve your profitability and give you a competitive edge
- How to ensure that your cloud environment is secure
- How to evaluate which applications are suitable for a hybrid or public cloud environment
- How to get started and choose a service provider
Tags : 
     IBM
By: SGI     Published Date: Jun 08, 2016
With High Performance Computing (HPC) supercomputer systems that comprise tens, hundreds, or even thousands of computing cores, users are able to increase application performance and accelerate their workflows to realize dramatic productivity improvements. The performance potential often comes at the cost of complexity. By their very nature, supercomputers comprise a great number of components, both hardware and software, that must be installed, configured, tuned, and monitored to maintain maximum efficiency. In a recent report, IDC lists downtime and latency as two of the most important problems faced by data center managers.
Tags : 
     SGI
By: Penguin Computing     Published Date: Mar 23, 2015
The Open Compute Project, initiated by Facebook as a way to increase computing power while lowering associated costs with hyper-scale computing, has gained a significant industry following. While the initial specifications were created for a Web 2.0 environment, Penguin Computing has adapted these concepts to create a complete hardware ecosystem solution that addresses these needs and more. The Tundra OpenHPC system is applicable to a wide range of HPC challenges and delivers the most requested features for data center architects.
Tags : penguin computing, open computing, computing power, hyper-scale computing, tundra openhpc
     Penguin Computing
By: Adaptive Computing     Published Date: Feb 21, 2014
Big data applications represent a fast-growing category of high-value applications that are increasingly employed by business and technical computing users. However, they have exposed an inconvenient dichotomy in the way resources are utilized in data centers. Conventional enterprise and web-based applications can be executed efficiently in virtualized server environments, where resource management and scheduling are generally confined to a single server. By contrast, data-intensive analytics and technical simulations demand large aggregated resources, necessitating intelligent scheduling and resource management that spans a computer cluster, cloud, or entire data center. Although these tools exist in isolation, they are not available in a general-purpose framework that allows them to interoperate easily and automatically within existing IT infrastructure. (A minimal scheduling sketch illustrating this dichotomy follows this entry.)
Tags : 
     Adaptive Computing
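The scheduling dichotomy described in the Adaptive Computing abstract above can be made concrete with a small sketch. The snippet below is illustrative only: the Node and Job classes and the two schedule_* functions are hypothetical names invented for this example, not any vendor's actual scheduler API. It contrasts per-server placement, where a whole job must fit on one host, with cluster-wide scheduling, where a data-intensive job aggregates free cores across many hosts.

# Illustrative sketch only: hypothetical names, not any vendor's scheduler API.
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Node:
    name: str
    free_cores: int

@dataclass
class Job:
    name: str
    cores: int  # total cores the job needs

def schedule_per_server(job: Job, nodes: List[Node]) -> Optional[str]:
    """Conventional virtualized placement: the entire job must fit on a single node."""
    for node in nodes:
        if node.free_cores >= job.cores:
            node.free_cores -= job.cores
            return node.name
    return None  # no single node is large enough

def schedule_cluster_wide(job: Job, nodes: List[Node]) -> Optional[Dict[str, int]]:
    """Aggregated placement: spread the job's core demand across the whole cluster."""
    if sum(n.free_cores for n in nodes) < job.cores:
        return None  # even the full cluster lacks capacity
    placement: Dict[str, int] = {}
    remaining = job.cores
    for node in nodes:
        take = min(node.free_cores, remaining)
        if take > 0:
            placement[node.name] = take
            node.free_cores -= take
            remaining -= take
        if remaining == 0:
            break
    return placement

if __name__ == "__main__":
    cluster = [Node(f"node{i}", free_cores=16) for i in range(4)]
    analytics = Job("analytics", cores=40)            # larger than any single host
    print(schedule_per_server(analytics, cluster))    # None: does not fit on one node
    print(schedule_cluster_wide(analytics, cluster))  # e.g. {'node0': 16, 'node1': 16, 'node2': 8}

Running the example shows the per-server scheduler rejecting a 40-core job on a cluster of 16-core nodes, while the cluster-wide scheduler places it across three hosts.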
By: Seagate     Published Date: Sep 30, 2015
Although high-performance computing (HPC) often stands apart from a typical IT infrastructure—it uses highly specialized scale-out compute, networking and storage resources—it shares with mainstream IT the ability to push data center capacity to the breaking point. Much of this is due to data center inefficiencies caused by HPC storage growth. The Seagate® ClusterStor™ approach to scale-out HPC storage can significantly improve data center efficiency. No other vendor solution offers the same advantages.
Tags : 
     Seagate
By: IBM     Published Date: Nov 14, 2014
The necessary compute power to drive modern biomedical research is a formidable and familiar challenge throughout the life sciences. Underlying infrastructures must evolve to keep pace with innovation. In such demanding HPC environments, cloud computing technologies represent a powerful approach to managing technical computing resources. This paper shares valuable insight for life science centers leveraging cloud concepts to manage their infrastructure.
Tags : 
     IBM
By: Panasas     Published Date: Oct 02, 2014
HPC and technical computing environments require the collection, storage, and transmission of large-scale datasets. To meet these demands, data center architects must consider how increasing storage capacity over time will affect HPC workloads, performance, and system availability. While many enterprises have looked to scale-up NAS to meet their storage needs, this approach can lead to data islands that make it difficult to share data. Distributed, scale-out storage was developed to get around the technology limitations of scale-up NAS architectures.
Tags : 
     Panasas
By: Dell and Intel®     Published Date: Mar 30, 2015
Dell has teamed with Intel® to create innovative solutions that can accelerate the research, diagnosis and treatment of diseases through personalized medicine. The combination of leading-edge Intel® Xeon® processors and the systems and storage expertise from Dell creates a state-of-the-art data center solution that is easy to install, manage and expand as required. Labelled the Dell Genomic Data Analysis Platform (GDAP), this solution is designed to achieve fast results with maximum efficiency. The solution is architected to address a number of customer challenges, including the perception that implementation must be large-scale in nature, as well as compliance, security and clinician use.
Tags : 
     Dell and Intel®
By: Dell and Intel®     Published Date: Nov 18, 2015
The NCSA Private Sector Program creates a high-performance computing cluster to help corporations overcome critical challenges. Through its Private Sector Program (PSP), NCSA has provided supercomputing, consulting, research, prototyping and development, and production services to more than one-third of the Fortune 50, in manufacturing, oil and gas, finance, retail/wholesale, bio/medical, life sciences, technology and other sectors. “We’re not the typical university supercomputer center, and PSP isn’t a typical group,” Giles says. “Our focus is on helping companies leverage high-performance computing in ways that make them more competitive.”
Tags : 
     Dell and Intel®
By: Penguin Computing     Published Date: Oct 14, 2015
IT organizations are facing increasing pressure to deliver critical services to their users while their budgets are either reduced or maintained at current levels. New technologies have the potential to deliver industry-changing information to users who need data in real time, but only if the IT infrastructure is designed and implemented to do so. While computing power continues to decline in cost, the cost of managing and running large data centers continues to increase. Server administration over the life of the compute asset will consume about 75 percent of the total cost. (A rough worked example of that figure follows this entry.)
Tags : 
     Penguin Computing
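To put the 75 percent figure in the Penguin Computing abstract above into perspective, here is a rough worked reading of the claim. It assumes, purely for illustration, that the remaining 25 percent of lifetime cost is the hardware acquisition itself:

    administration = 0.75 × total cost
    acquisition   ≈ 0.25 × total cost
    therefore administration ≈ 3 × acquisition

Under that assumption, every dollar spent buying a server implies roughly three dollars of administration over its life, which is the kind of ratio that motivates the paper's focus on management costs.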
By: HP     Published Date: Oct 08, 2015
Administrators, engineers and executives are now tasked with solving some of the world’s most complex challenges. This could revolve around advanced computations for science, business, education, pharmaceuticals and beyond. Here’s the challenge – many data centers are reaching peak levels of resource consumption, and there’s more work to be done. So how are engineers and scientists supposed to continue working around such high-demand applications? How can they continue to create ground-breaking research while still utilizing optimized infrastructure? How can a platform scale to the new needs and demands of these types of users and applications? This is where HP Apollo Systems help reinvent the modern data center and accelerate your business.
Tags : apollo systems, reinventing hpc and the supercomputer, reinventing modern data center
     HP
By: AMD     Published Date: Nov 09, 2015
Graphics Processing Units (GPUs) have become a compelling technology for High Performance Computing (HPC), delivering exceptional performance per watt and impressive densities for data centers. AMD has partnered with Hewlett Packard Enterprise to offer compelling solutions to drive your HPC workloads to new levels of performance. Learn about the awe-inspiring performance and energy efficiency of the AMD FirePro™ S9150, found in multiple HPE servers including the popular 2U HPE ProLiant DL380 Gen9 server. See why open standards matter for HPC, and what AMD is doing in this area. Click here to read more on AMD FirePro™ server GPUs for HPE ProLiant servers.
Tags : 
     AMD
By: Avere Systems     Published Date: Jun 27, 2016
This white paper reviews common HPC-environment challenges and outlines solutions that can help IT professionals deliver best-in-class HPC cloud solutions—without undue stress and organizational chaos. The paper:
• Identifies current issues that stress IT teams and existing infrastructure across industries and HPC applications, including data management, data center limitations, user expectations, and technology shifts
• Describes the potential cost savings, operational scale, and new functionality that cloud solutions can bring to big compute
• Characterizes technical and other barriers to an all-cloud infrastructure and describes how IT teams can leverage a hybrid cloud for compute power, maximum flexibility, and protection against lock-in scenarios
Tags : 
     Avere Systems
By: Genesys     Published Date: Feb 12, 2019
What if the cloud could radically improve your customer’s experience, your operations, and your bottom line? There’s a reason why many organisations are taking advantage of the benefits of cloud for contact centers. This eBook focuses on two profiles for small contact centers: small business, and small of large – a small contact center that is part of a much larger enterprise. Get key insights from independent market research that will help you make a case to take your customer communications platform to the cloud. With the right solution, your business can benefit from enterprise-quality capabilities at a price you can afford. And you can realise a return on investment in as little as three months! (Generic ROI formulas follow this entry.) Download this eBook and learn:
- How to calculate ROI and time-to-value in different types of small contact center profiles
- What factors to consider when selecting a cloud vendor
- Three common myths about the cloud
Tags : 
     Genesys
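As background for the ROI and time-to-value items in the Genesys entry above, the generic formulas typically used for such estimates are shown below. These are standard definitions, not Genesys's specific methodology, and the example figures are purely hypothetical:

    ROI = (total benefit − total cost) / total cost
    payback period (months) = upfront cost / net monthly benefit

For instance, a hypothetical $30,000 migration that yields $10,000 per month in net savings would pay back in about three months, the kind of timeline the eBook cites.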
By: Forcepoint     Published Date: May 14, 2019
2018 NSS Labs SD-WAN Group Test
In this report, NSS Labs simulated an enterprise network that has branches connected to a data center through two links: an MPLS line and a commercial broadband connection. They reviewed a select number of vendors, testing their throughput performance, video quality and VoIP quality as well as security effectiveness. NSS Labs verified that Forcepoint NGFW handled all of their use cases and offers all the operational capabilities that they recommend as necessary for SD-WAN as well as scoring 100% across all security tests, blocking all evasion techniques.
“Forcepoint is one of the few vendors to support all of the use cases and capabilities we tested as well as strong security in their SD-WAN solution. They should be on the short list for any organization that’s looking to connect and protect their distributed enterprise.” - Vikram Phatak, CEO, NSS Labs
Read the report and learn how Forcepoint delivers SD-WAN with enterprise scale and security to ma
Tags : 
     Forcepoint
By: Forcepoint     Published Date: May 14, 2019
All Clouds are not Equal
Today’s organizations turn to the cloud for all types of productivity-gaining tools – including security. Features such as security for mobile users and data loss protection are key, but it’s also important to separate fact from fiction when looking at the infrastructure of the provider. This ebook helps you consider the importance of:
- Data center locations
- Security controls, data privacy, availability, reliability, and performance
- What your organization or agency actually needs
Download the ebook to discover five common misconceptions about cloud-based security infrastructure, and what you should really be looking for in a cloud-based security solution.
Tags : 
     Forcepoint
By: Rackspace     Published Date: May 15, 2019
More and more enterprises like yours are moving critical business applications to AWS as part of a digital transformation strategy, but many don't know where to start. Getting out of your data center and onto AWS can provide significant business benefits, including improved agility, enhanced scalability and cost savings. But are you realizing the full benefit of a move to the AWS cloud?
Tags : 
     Rackspace
Get your white papers featured in the insideHPC White Paper Library. Contact: Kevin@insideHPC.com