data center

Results 1 - 25 of 2066
By: Cray     Published Date: Jul 02, 2015
As global energy costs climb, Cray has taken its long-standing expertise in optimizing power and cooling and focused it on developing overall system energy efficiency. The resulting Cray XC supercomputer series integrates into modern datacenters and achieves high levels of efficiency while minimizing system and infrastructure costs.
Tags : 
     Cray
By: Green Revolution Cooling     Published Date: Feb 20, 2014
This paper examines the advantages of liquid submersion cooling and, in particular, takes a closer look at GreenDEF™, the dielectric mineral oil blend used by Green Revolution Cooling, a global leader in submersion cooling technologies. Further, the paper will address concerns of potential adopters of submersion systems and explain why these systems can actually improve performance in servers and protect expensive data center investments.
Tags : 
     Green Revolution Cooling
By: Green Revolution Cooling     Published Date: May 12, 2014
Download Green Revolution Cooling’s White Paper “Data Center Floor Space Utilization – Comparing Density in Liquid Submersion and Air Cooling Systems” to learn about the density of liquid submersion cooling, how looks can be deceiving, and how, more often than not, liquid cooling once again beats air.
Tags : green revolution, data center floor space
     Green Revolution Cooling
By: IBM     Published Date: Sep 16, 2015
Are you trying to support more variable workloads than your environment can handle? Could you benefit from a high performance cluster, but lack the budget or resources to deploy and manage technical computing infrastructure? Are you running out of data center space but still need to grow your compute capacity? If you want answers to any of these questions, please join us for an informative webinar describing the advantages and pitfalls of relocating a high performance workload to the cloud. View this webinar to learn:
- Why general purpose clouds are insufficient for technical computing, analytics, and Hadoop workloads
- How high performance clouds can improve your profitability and give you a competitive edge
- How to ensure that your cloud environment is secure
- How to evaluate which applications are suitable for a hybrid or public cloud environment
- How to get started and choose a service provider
Tags : 
     IBM
By: SGI     Published Date: Jun 08, 2016
With High Performance Computing (HPC) supercomputer systems that comprise tens, hundreds, or even thousands of computing cores, users are able to increase application performance and accelerate their workflows to realize dramatic productivity improvements. The performance potential often comes at the cost of complexity. By their very nature, supercomputers comprise a great number of components, both hardware and software, that must be installed, configured, tuned, and monitored to maintain maximum efficiency. In a recent report, IDC lists downtime and latency as two of the most important problems faced by data center managers.
Tags : 
     SGI
By: Penguin Computing     Published Date: Mar 23, 2015
The Open Compute Project, initiated by Facebook as a way to increase computing power while lowering associated costs with hyper-scale computing, has gained a significant industry following. While the initial specifications were created for a Web 2.0 environment, Penguin Computing has adapted these concepts to create a complete hardware ecosystem solution that addresses these needs and more. The Tundra OpenHPC system is applicable to a wide range of HPC challenges and delivers the most requested features for data center architects.
Tags : penguin computing, open computing, computing power, hyper-scale computing, tundra openhpc
     Penguin Computing
By: Adaptive Computing     Published Date: Feb 21, 2014
Big data applications represent a fast-growing category of high-value applications that are increasingly employed by business and technical computing users. However, they have exposed an inconvenient dichotomy in the way resources are utilized in data centers. Conventional enterprise and web-based applications can be executed efficiently in virtualized server environments, where resource management and scheduling are generally confined to a single server. By contrast, data-intensive analytics and technical simulations demand large aggregated resources, necessitating intelligent scheduling and resource management that spans a computer cluster, cloud, or entire data center. Although these tools exist in isolation, they are not available in a general-purpose framework that allows them to interoperate easily and automatically within existing IT infrastructure.
Tags : 
     Adaptive Computing
By: Seagate     Published Date: Sep 30, 2015
Although high-performance computing (HPC) often stands apart from a typical IT infrastructure—it uses highly specialized scale-out compute, networking and storage resources—it shares with mainstream IT the ability to push data center capacity to the breaking point. Much of this is due to data center inefficiencies caused by HPC storage growth. The Seagate® ClusterStor™ approach to scale-out HPC storage can significantly improve data center efficiency. No other vendor solution offers the same advantages.
Tags : 
     Seagate
By: Panasas     Published Date: Oct 02, 2014
HPC and technical computing environments require the collection, storage, and transmission of large-scale datasets. To meet these demands, data center architects must consider how increasing storage capacity over time will affect HPC workloads, performance, and system availability. While many enterprises have looked to scale-up NAS to meet their storage needs, this approach can lead to data islands that make it difficult to share data. Distributed, scale-out storage was developed to get around the technology limitations of scale-up NAS architectures.
Tags : 
     Panasas
By: Dell and Intel®     Published Date: Mar 30, 2015
Dell has teamed with Intel® to create innovative solutions that can accelerate the research, diagnosis and treatment of diseases through personalized medicine. The combination of leading-edge Intel® Xeon® processors and the systems and storage expertise from Dell creates a state-of-the-art data center solution that is easy to install, manage and expand as required. Labelled the Dell Genomic Data Analysis Platform (GDAP), this solution is designed to achieve fast results with maximum efficiency. The solution is architected to address a number of customer challenges, including the perception that implementations must be large-scale in nature, as well as compliance, security, and clinician use.
Tags : 
     Dell and Intel®
By: Penguin Computing     Published Date: Oct 14, 2015
IT organizations are facing increasing pressure to deliver critical services to their users while their budgets are reduced or held at current levels. New technologies have the potential to deliver industry-changing information to users who need data in real time, but only if the IT infrastructure is designed and implemented to do so. While computing power continues to decline in cost, the cost of managing and running large data centers keeps increasing. Server administration over the life of a computing asset will consume about 75 percent of its total cost (a short illustrative calculation follows this entry).
Tags : 
     Penguin Computing
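To make that 75 percent figure concrete, here is a minimal sketch that splits a hypothetical lifetime total cost of ownership into administration and everything else. Only the 75 percent share comes from the abstract above; the dollar amount is invented for illustration.

```python
# Hypothetical illustration of the claim that server administration accounts
# for roughly 75% of a server's lifetime total cost of ownership (TCO).
ADMIN_SHARE = 0.75            # the ~75 percent figure cited in the abstract
total_tco = 40_000            # hypothetical lifetime TCO per server, in dollars

admin_cost = total_tco * ADMIN_SHARE
other_cost = total_tco - admin_cost    # hardware, power, facilities, etc.

print(f"Administration over the server's life: ${admin_cost:,.0f}")   # $30,000
print(f"Everything else:                       ${other_cost:,.0f}")   # $10,000
```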
By: HP     Published Date: Oct 08, 2015
Administrators, engineers and executives are now tasked with solving some of the world’s most complex challenges. These could revolve around advanced computations for science, business, education, pharmaceuticals and beyond. Here’s the challenge: many data centers are reaching peak levels of resource consumption, and there’s more work to be done. So how are engineers and scientists supposed to keep working with such high-demand applications? How can they continue to produce ground-breaking research while still utilizing optimized infrastructure? How can a platform scale to the new needs and demands of these types of users and applications? This is where HP Apollo Systems help reinvent the modern data center and accelerate your business.
Tags : apollo systems, reinventing hpc and the supercomputer, reinventing modern data center
     HP
By: AMD     Published Date: Nov 09, 2015
Graphics Processing Units (GPUs) have become a compelling technology for High Performance Computing (HPC), delivering exceptional performance per watt and impressive densities for data centers. AMD has partnered with Hewlett Packard Enterprise to offer solutions that drive your HPC workloads to new levels of performance. Learn about the awe-inspiring performance and energy efficiency of the AMD FirePro™ S9150, found in multiple HPE servers including the popular 2U HPE ProLiant DL380 Gen9 server. See why open standards matter for HPC, and what AMD is doing in this area. Click here to read more on AMD FirePro™ server GPUs for HPE ProLiant servers.
Tags : 
     AMD
By: Avere Systems     Published Date: Jun 27, 2016
This white paper reviews common HPC-environment challenges and outlines solutions that can help IT professionals deliver best-in-class HPC cloud solutions without undue stress and organizational chaos. The paper:
• Identifies current issues (including data management, data center limitations, user expectations, and technology shifts) that stress IT teams and existing infrastructure across industries and HPC applications
• Describes the potential cost savings, operational scale, and new functionality that cloud solutions can bring to big compute
• Characterizes technical and other barriers to an all-cloud infrastructure and describes how IT teams can leverage a hybrid cloud for compute power, maximum flexibility, and protection against lock-in
Tags : 
     Avere Systems
By: SAP     Published Date: Jul 23, 2018
Leading companies are making digital transformation a reality by putting data and intelligence at the center of their future. They are building new capabilities, skills, and technology, and evolving their culture, to transform into an ‘Intelligent Enterprise’ and achieve better business outcomes. These companies are not only delivering short-term value to shareholders, but are also positioned to thrive and transform their industry. Explore how SAP can help you navigate the journey to the Intelligent Enterprise.
Tags : 
     SAP
By: Hewlett Packard Enterprise     Published Date: Jul 18, 2018
"Managing infrastructure has always brought with it frustration, headaches and wasted time. That’s because IT professionals have to spend their days, nights and weekends dealing with problems that are disruptive to their applications and organization and manually tune their infrastructure. And, the challenges increase as the number of applications and reliance on infrastructure continues to grow. Luckily, there is a better way. HPE InfoSight is artificial intelligence (AI) that predicts and prevents problems across the infrastructure stack and ensures optimal performance and efficient resource use. "
Tags : 
     Hewlett Packard Enterprise
By: Hewlett Packard Enterprise     Published Date: Jul 18, 2018
"Principled Technologies executed four typical deployment and management scenarios using both HPE Synergy and Cisco UCS. They found that HPE Synergy saved 71.5 minutes and 86 steps, and used four fewer tools compared to Cisco UCS. In a hypothetical 200-node datacenter, that’s a total of 9 work weeks, or just over 2 months’ time savings on routine tasks."
Tags : 
     Hewlett Packard Enterprise
By: Dell     Published Date: Sep 24, 2018
The IDPA DP4400 provides modern and powerful data protection for midsize organizations, allowing companies to leverage the benefits of the cloud within their existing environments. The DP4400 can help transform your environment for the future, laying the technical foundation for the data center while modernizing your data protection for the cloud.
Tags : data, protection, cloud, integration, idpa
     Dell
By: Dell Server     Published Date: Aug 08, 2018
Transform your IT with the clear winner in server innovation. Greater performance, agility and security are the new imperatives of the modern data center. Dell EMC PowerEdge servers, with Intel® Xeon® Scalable processors, are ready to meet those demands, today and tomorrow.
Tags : 
     Dell Server
By: Dell Server     Published Date: Aug 08, 2018
This paper provides an overview of the changing dynamics in the business world that demand a new approach to IT infrastructure. It offers a perspective for business managers and executives who are looking for a way to align business and IT by facing the challenges of disruption for better business outcomes. We discuss the Kinetic Infrastructure from Dell EMC, powered by Intel® Xeon® Platinum processors, which is designed to support IT flexibility and business agility. In addition, we describe the first implementation of kinetic infrastructure on the Dell EMC PowerEdge MX system. The paper explains how Dell EMC is helping businesses rethink their data center architecture and accelerate their path toward greater agility.
Tags : 
     Dell Server
By: VMware     Published Date: Oct 02, 2018
Digital disruption is fundamentally changing IT. Today’s organizations are under more pressure than ever to innovate fast and offer a superior experience to every customer. In this white paper, we explore the advantages that a modernized data center can bring to IT organizations seeking to keep pace in a dynamic environment, and how a software-defined approach can help move them forward. Real-world examples showcase how VMware is enabling IT teams to develop future-proof strategies with a foundation that is ready for cloud environments as well as global expansion and customer acquisitions. Submit the form to read the whitepaper and discover these advantages and how you could develop a future-proof strategy with VMware.
Tags : 
     VMware
By: Evariant     Published Date: Aug 22, 2018
Health systems that implement CRM-based engagement solutions stand to achieve significant strategic gains, from acquiring and retaining patients to supporting more positive clinical outcomes to increasing referrals to member providers and practices. Driven by easy access to rich consumer data and analytic profiling, as well as patient encounter histories, CRM-based engagement centers optimized for healthcare enable health system CSRs to quickly offer personalized responses to consumer or patient inquiries via multiple in- and outbound media, including telephone, email, social, and web.
Tags : transforming, engagement, center, crm, health, systems
     Evariant
By: Turbonomic     Published Date: Jul 05, 2018
The hybrid cloud has been heralded as a promising IT operational model enabling enterprises to maintain security and control over the infrastructure on which their applications run. At the same time, it promises to maximize ROI from their local data center and leverage public cloud infrastructure for an occasional demand spike. Public clouds are relatively new in the IT landscape and their adoption has accelerated over the last few years with multiple vendors now offering solutions as well as improved on-ramps for workloads to ease the adoption of a hybrid cloud model. With these advances and the ability to choose between a local data center and multiple public cloud offerings, one fundamental question must still be answered: What, when and where to run workloads to assure performance while maximizing efficiency? In this whitepaper, we explore some of the players in Infrastructure-as-a-Service (IaaS) and hybrid cloud, the challenges surrounding effective implementation, and how to iden
Tags : 
     Turbonomic
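The “what, when and where to run workloads” question posed in the Turbonomic abstract above can be made concrete with a toy rent-versus-own cost comparison. The sketch below is not Turbonomic’s method; the rates, capacities, and workloads are all hypothetical, and real placement decisions also weigh performance, data gravity, egress fees, and compliance.

```python
# Toy rent-versus-own comparison for deciding where a workload should run.
# Every figure below is hypothetical and purely illustrative.

from dataclasses import dataclass

CLOUD_RATE = 0.05        # $/vCPU-hour, hypothetical public-cloud on-demand rate
OWNED_RATE = 20.0        # $/vCPU-month to own on-prem capacity (amortized
                         #   hardware, power, facilities, and administration)
FREE_ON_PREM_VCPUS = 64  # idle local capacity that is already paid for

@dataclass
class Workload:
    name: str
    vcpus: int
    hours_per_month: float

def place(w: Workload) -> str:
    """Pick the cheaper location for a single workload."""
    if w.vcpus <= FREE_ON_PREM_VCPUS:
        return "on-prem (existing capacity)"
    cloud_cost = w.vcpus * w.hours_per_month * CLOUD_RATE
    owned_cost = w.vcpus * OWNED_RATE  # owned capacity costs the same whether or not it runs
    return "public cloud" if cloud_cost < owned_cost else "expand on-prem"

if __name__ == "__main__":
    for w in (Workload("quarter-end analytics spike", vcpus=256, hours_per_month=40),
              Workload("always-on ERP database", vcpus=256, hours_per_month=720)):
        print(f"{w.name}: {place(w)}")
```

Under these made-up rates, the occasional spike is cheaper to burst to the public cloud, while the always-on workload is cheaper to keep in the local data center, which is the hybrid pattern the abstract describes.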
By: Turbonomic     Published Date: Jul 05, 2018
Organizations are adopting cloud computing to accelerate service delivery. Some try to deliver cloud economies of scale in their private data centers with the mantra “automate everything,” a philosophy often simpler in theory than practice. Others have opted to leverage public cloud resources for the added benefit of the pay-as-you-go model, but are finding it difficult to keep costs in check. Regardless of approach, cloud technology poses the same challenge IT has faced for decades: how to assure application performance while minimizing costs.
Tags : 
     Turbonomic