data center

By: Cray     Published Date: Jul 02, 2015
As global energy costs climb, Cray has taken its long-standing expertise in optimizing power and cooling and focused it on developing overall system energy efficiency. The resulting Cray XC supercomputer series integrates into modern data centers and achieves high levels of efficiency while minimizing system and infrastructure costs.
Tags : 
     Cray
By: Green Revolution Cooling     Published Date: Feb 20, 2014
This paper examines the advantages of liquid submersion cooling and, in particular, takes a closer look at GreenDEF™, the dielectric mineral oil blend used by Green Revolution Cooling, a global leader in submersion cooling technologies. Further, the paper will address concerns of potential adopters of submersion systems and explain why these systems can actually improve performance in servers and protect expensive data center investments.
Tags : 
     Green Revolution Cooling
By: Green Revolution Cooling     Published Date: May 12, 2014
Download Green Revolution Cooling's white paper, "Data Center Floor Space Utilization – Comparing Density in Liquid Submersion and Air Cooling Systems," to learn about the density of liquid submersion cooling, how looks can be deceiving, and how, more often than not, liquid cooling once again has air beat.
Tags : green revolution, data center floor space
     Green Revolution Cooling
By: IBM     Published Date: Sep 16, 2015
Are you trying to support more variable workloads than your environment can handle? Can you benefit from a high performance cluster, but do not have the budget or resources to deploy and manage technical computing infrastructure? Are you running out of data center space but still need to grow your compute capacity? If you want answers to any of these questions, then please join us for an informative webinar describing the advantages and pitfalls of relocating a high performance workload to the cloud. View this webinar to learn:
- Why general purpose clouds are insufficient for technical computing, analytics, and Hadoop workloads
- How high performance clouds can improve your profitability and give you a competitive edge
- How to ensure that your cloud environment is secure
- How to evaluate which applications are suitable for a hybrid or public cloud environment
- How to get started and choose a service provider
Tags : 
     IBM
By: SGI     Published Date: Jun 08, 2016
With High Performance Computing (HPC) supercomputer systems that comprise tens, hundreds, or even thousands of computing cores, users are able to increase application performance and accelerate their workflows to realize dramatic productivity improvements. The performance potential often comes at the cost of complexity. By their very nature, supercomputers comprise a great number of components, both hardware and software, that must be installed, configured, tuned, and monitored to maintain maximum efficiency. In a recent report, IDC lists downtime and latency as two of the most important problems faced by data center managers.
Tags : 
     SGI
By: Penguin Computing     Published Date: Mar 23, 2015
The Open Compute Project, initiated by Facebook as a way to increase computing power while lowering associated costs with hyper-scale computing, has gained a significant industry following. While the initial specifications were created for a Web 2.0 environment, Penguin Computing has adapted these concepts to create a complete hardware ecosystem solution that addresses these needs and more. The Tundra OpenHPC system is applicable to a wide range of HPC challenges and delivers the most requested features for data center architects.
Tags : penguin computing, open computing, computing power, hyper-scale computing, tundra openhpc
     Penguin Computing
By: Adaptive Computing     Published Date: Feb 21, 2014
Big data applications represent a fast-growing category of high-value applications that are increasingly employed by business and technical computing users. However, they have exposed an inconvenient dichotomy in the way resources are utilized in data centers. Conventional enterprise and web-based applications can be executed efficiently in virtualized server environments, where resource management and scheduling are generally confined to a single server. By contrast, data-intensive analytics and technical simulations demand large aggregated resources, necessitating intelligent scheduling and resource management that spans a computer cluster, cloud, or entire data center. Although these tools exist in isolation, they are not available in a general-purpose framework that allows them to interoperate easily and automatically within existing IT infrastructure.
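As a rough illustration of that dichotomy, here is a toy Python sketch (node counts and function names are invented for illustration; this is not the framework the paper describes) contrasting placement confined to a single server with scheduling that aggregates resources across a cluster:

```python
# Toy contrast: per-server placement vs. cluster-wide scheduling.
# Capacities and job sizes are purely illustrative.

nodes = {"node1": 16, "node2": 16, "node3": 16}  # free cores per node

def place_single_server(job_cores):
    """Conventional virtualized placement: the job must fit on one server."""
    for name, free in nodes.items():
        if free >= job_cores:
            nodes[name] -= job_cores
            return [name]
    return None  # no single node can host the job

def place_cluster_wide(job_cores):
    """Cluster-wide scheduling: aggregate free cores across many nodes."""
    if sum(nodes.values()) < job_cores:
        return None
    placement, remaining = [], job_cores
    for name in sorted(nodes, key=nodes.get, reverse=True):
        if remaining <= 0:
            break
        take = min(nodes[name], remaining)
        if take:
            nodes[name] -= take
            remaining -= take
            placement.append((name, take))
    return placement

print(place_single_server(32))  # None: no one node has 32 free cores
print(place_cluster_wide(32))   # [('node1', 16), ('node2', 16)]
```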
Tags : 
     Adaptive Computing
By: Seagate     Published Date: Sep 30, 2015
Although high-performance computing (HPC) often stands apart from a typical IT infrastructure—it uses highly specialized scale-out compute, networking and storage resources—it shares with mainstream IT the ability to push data center capacity to the breaking point. Much of this is due to data center inefficiencies caused by HPC storage growth. The Seagate® ClusterStor™ approach to scale-out HPC storage can significantly improve data center efficiency. No other vendor solution offers the same advantages.
Tags : 
     Seagate
By: Panasas     Published Date: Oct 02, 2014
HPC and technical computing environments require the collection, storage, and transmission of large-scale datasets. To meet these demands, data center architects must consider how increasing storage capacity over time will affect HPC workloads, performance, and system availability. While many enterprises have looked to scale-up NAS to meet their storage needs, this approach can lead to data islands that make it difficult to share data. Distributed, scale-out storage was developed to get around the technology limitations of scale-up NAS architectures.
Tags : 
     Panasas
By: Dell and Intel®     Published Date: Mar 30, 2015
Dell has teamed with Intel® to create innovative solutions that can accelerate the research, diagnosis and treatment of diseases through personalized medicine. The combination of leading-edge Intel® Xeon® processors and the systems and storage expertise from Dell create a state-of-the-art data center solution that is easy to install, manage and expand as required. Labelled the Dell Genomic Data Analysis Platform (GDAP), this solution is designed to achieve fast results with maximum efficiency. The solution is architected to address a number of customer challenges, including compliance, security, clinician use cases, and the perception that any implementation must be large-scale in nature.
Tags : 
     Dell and Intel®
By: Penguin Computing     Published Date: Oct 14, 2015
IT organizations are facing increasing pressure to deliver critical services to their users while their budgets are either reduced or maintained at current levels. New technologies have the potential to deliver industry-changing information to users who need data in real time, but only if the IT infrastructure is designed and implemented to do so. While computing power continues to decline in cost, the cost of managing and running large data centers continues to increase. Server administration over the life of a computing asset will consume about 75 percent of its total cost.
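Taken at face value, and assuming for simplicity that hardware accounts for the remaining 25 percent, that estimate implies administration spending is roughly triple the purchase price. A quick back-of-the-envelope sketch (the $5,000 server price is hypothetical; only the ratio comes from the text):

```python
# Lifetime TCO split implied by the ~75% administration figure.
admin_share = 0.75
hardware_cost = 5_000                           # illustrative server price
total_cost = hardware_cost / (1 - admin_share)  # hardware as the other 25%
admin_cost = total_cost * admin_share

print(f"Lifetime total: ${total_cost:,.0f}")    # $20,000
print(f"Administration: ${admin_cost:,.0f}")    # $15,000
```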
Tags : 
     Penguin Computing
By: HP     Published Date: Oct 08, 2015
Administrators, engineers and executives are now tasked with solving some of the world’s most complex challenges. This could revolve around advanced computations for science, business, education, pharmaceuticals and beyond. Here’s the challenge – many data centers are reaching peak levels of resource consumption, and there’s more work to be done. So how are engineers and scientists supposed to continue working around such high-demand applications? How can they continue to create ground-breaking research while still utilizing optimized infrastructure? How can a platform scale to the new needs and demands of these types of users and applications? This is where HP Apollo Systems help reinvent the modern data center and accelerate your business.
Tags : apollo systems, reinventing hpc and the supercomputer, reinventing modern data center
     HP
By: AMD     Published Date: Nov 09, 2015
Graphics Processing Units (GPUs) have become a compelling technology for High Performance Computing (HPC), delivering exceptional performance per watt and impressive densities for data centers. AMD has partnered with Hewlett Packard Enterprise to offer compelling solutions to drive your HPC workloads to new levels of performance. Learn about the awe-inspiring performance and energy efficiency of the AMD FirePro™ S9150, found in multiple HPE servers including the popular 2U HPE ProLiant DL380 Gen9 server. See why open standards matter for HPC, and what AMD is doing in this area. Click here to read more on AMD FirePro™ server GPUs for HPE ProLiant servers.
Tags : 
     AMD
By: Avere Systems     Published Date: Jun 27, 2016
This white paper reviews common HPC-environment challenges and outlines solutions that can help IT professionals deliver best-in-class HPC cloud solutions without undue stress and organizational chaos. The paper:
• Identifies current issues, including data management, data center limitations, user expectations, and technology shifts, that stress IT teams and existing infrastructure across industries and HPC applications
• Describes the potential cost savings, operational scale, and new functionality that cloud solutions can bring to big compute
• Characterizes technical and other barriers to an all-cloud infrastructure and describes how IT teams can leverage a hybrid cloud for compute power, maximum flexibility, and protection against lock-in scenarios (see the sketch after this list)
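A minimal sketch of the hybrid "burst to cloud" pattern alluded to above (capacities and names are hypothetical; a real deployment would route through a scheduler, not a function like this):

```python
# Run on local HPC capacity first; overflow to cloud nodes only when
# the on-premises cluster is saturated. Thresholds are illustrative.

LOCAL_CORES = 1_024          # on-premises cluster capacity (hypothetical)
local_in_use = 0

def submit(job_cores):
    global local_in_use
    if local_in_use + job_cores <= LOCAL_CORES:
        local_in_use += job_cores
        return "local"       # cheapest: capacity already paid for
    return "cloud"           # burst: rent elastic capacity on demand

for cores in (512, 384, 256, 128):
    print(cores, "->", submit(cores))
# 512 -> local, 384 -> local, 256 -> cloud (would exceed 1,024), 128 -> local
```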
Tags : 
     Avere Systems
By: Hewlett Packard Enterprise     Published Date: Feb 05, 2018
As businesses plunge into the digital future, no asset will have a greater impact on success than data. The ability to collect, harness, analyze, protect, and manage data will determine which businesses disrupt their industries, and which are disrupted; which businesses thrive, and which disappear. But traditional storage solutions are not designed to optimally handle such a critical business asset. Instead, businesses need to adopt an all-flash data center. In their new role as strategic business enablers, IT leaders have the responsibility to ensure that their businesses are protected, by investing in flexible, future-proof flash storage solutions. The right flash solution can deliver on critical business needs for agility, rapid growth, speed-to-market, data protection, application performance, and cost-effectiveness—while minimizing the maintenance and administration burden.
Tags : data, storage, decision makers, hpe
     Hewlett Packard Enterprise
By: Schneider Electric     Published Date: Feb 06, 2018
The growth and importance of edge and cloud-based applications are driving the data center industry to rethink the optimum level of redundancy of physical infrastructure equipment. Read our recommendations for evaluating resiliency needs in White Paper 256: "Why Cloud Computing is Requiring Us to Rethink Resiliency at the Edge."
Tags : edge computing, data center, cloud computing, cloud based applications
     Schneider Electric
By: APC by Schneider Electric     Published Date: Feb 12, 2018
Small server rooms and branch offices are typically unorganized, unsecure, hot, unmonitored, and space constrained. These conditions can lead to system downtime or, at the very least, lead to “close calls” that get management’s attention. Practical experience with these problems reveals a short list of effective methods to improve the availability of IT operations within small server rooms and branch offices. This paper discusses making realistic improvements to power, cooling, racks, physical security, monitoring, and lighting. The focus of this paper is on small server rooms and branch offices with up to 10kW of IT load.
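For context on that 10kW figure: essentially all electrical power drawn by IT equipment is rejected into the room as heat, so cooling capacity must roughly match the IT load. A quick conversion sketch using standard unit constants (the load value simply echoes the paper's stated ceiling):

```python
# Rough cooling requirement for a small server room at full IT load.
it_load_kw = 10.0
btu_per_hr = it_load_kw * 3_412   # 1 kW ≈ 3,412 BTU/hr
tons = btu_per_hr / 12_000        # 1 ton of cooling = 12,000 BTU/hr

print(f"{btu_per_hr:,.0f} BTU/hr ≈ {tons:.1f} tons of cooling")
# ~34,120 BTU/hr ≈ 2.8 tons
```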
Tags : 
     APC by Schneider Electric
By: F5 Networks Inc     Published Date: Jan 25, 2018
With digital transformation reshaping the modern enterprise, applications represent a new class of assets and an important source of differentiation. The ever-more-competitive digital economy requires that your applications be delivered with unprecedented speed, scale, and agility, which is why more and more organizations are turning to the cloud. This explosive growth of apps hosted in the cloud creates a world of opportunities—and a whole new set of challenges for organizations that must now deploy and manage a vast portfolio of applications in multi-cloud environments. Automation and orchestration systems can help streamline and standardize IT processes across traditional data centers, private clouds, and public clouds. But with rapid innovation come concerns about security and delivering a consistent experience across environments.
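As a loose illustration of that standardization idea, here is a minimal sketch (environment names and settings are hypothetical; a real pipeline would call each platform's own APIs) of one deployment template rendered consistently per environment:

```python
# One template, applied uniformly across data center and cloud targets,
# so every environment gets the same baseline configuration.

TEMPLATE = {"app": "storefront", "replicas": 3, "tls": True}
ENVIRONMENTS = ["on-prem-dc", "private-cloud", "public-cloud"]

def render(env, template):
    """Produce a consistent, per-environment deployment spec."""
    return {"target": env, **template}

for env in ENVIRONMENTS:
    print(render(env, TEMPLATE))
```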
Tags : application delivery, digital transformation, cloud technology, application, multi-cloud environment
     F5 Networks Inc
By: Zendesk     Published Date: Jan 19, 2018
Customers are more technically savvy than ever and have come to prefer the DIY approach to solving their issues and answering their own questions. Years of research by ICMI has confirmed that customers prefer to resolve issues themselves and within their channels of choice. Furthermore, customers only seek direct interactions when they unsuccessfully exhaust their self-service options. This is backed up by data from American Express, which found that 48% of consumers prefer to speak with a customer service rep when dealing with complex issues, but only 16% prefer the same contact for simple issues. The goal of this paper is simple: We want to help you build an all-in-one knowledge base, community, and customer portal. All of which can be accomplished with a help center like Zendesk Guide.
Tags : customer support software, customer service tool, customer relationship, multichannel support tool, customer support vendor evaluation, knowledge base management, help desk software, help desk portal
     Zendesk
By: NetApp     Published Date: Dec 18, 2013
IT managers have indicated that their two most significant challenges in managing unstructured data at multiple locations are keeping pace with data growth and improving data protection. Learn how the NetApp Distributed Content Repository provides advanced data protection and system recovery capabilities that can enable multiple data centers and remote offices to maintain access to data through hardware and software faults. Key benefits are:
- Continuous access to file data while maintaining data redundancy, with no administrator intervention needed
- Easy integration and deployment into a distributed environment, providing transparent, centrally managed content storage
- Secure multi-tenancy using security partitions
- Effectively infinite, on-demand capacity with fast access to files and objects in the cloud
- Secure, robust data protection techniques that enable data to persist beyond the life of the storage it resides on
Tags : 
     NetApp
By: ASG Software Solutions     Published Date: Nov 05, 2009
Effective workload automation that provides complete management-level visibility into real-time events impacting the delivery of IT services is needed by the data center more than ever before. The traditional job scheduling approach, with an uncoordinated set of tools that often requires reactive manual intervention to minimize service disruptions, is failing more than ever in today's complex world of IT, with its multiple platforms, applications, and virtualized resources.
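To make the contrast concrete, here is a toy Python sketch (not ASG's product; names are invented) of fixed-time scheduling versus event-driven triggering:

```python
import sched, time

def run_batch_job():
    print("batch job started at", time.strftime("%H:%M:%S"))

# Traditional approach: fire at a fixed time, whether or not upstream
# data is ready -- failures surface later and need manual intervention.
timer = sched.scheduler(time.time, time.sleep)
timer.enter(1, 1, run_batch_job)   # "1 second from now" stands in for 02:00
timer.run()

# Event-driven approach: trigger only when the condition is actually met.
def on_event(event):
    if event == "upstream_data_ready":
        run_batch_job()

on_event("upstream_data_ready")
```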
Tags : asg, cmdb, bsm, itil, bsm, metacmdb, workload automation, wla
     ASG Software Solutions
By: ThousandEyes     Published Date: Jan 14, 2018
ThousandEyes monitors and helps resolve application delivery problems across the Internet caused by data center outages, ISP performance issues, CDN caching behavior, and DNS availability.
Tags : thousandeyes, application delivery, isp performance, network performance, network monitoring
     ThousandEyes
