capacity

Results 1 - 25 of 492
By: IBM     Published Date: Sep 02, 2014
Comparing IBM performance against competitive offerings to help organizations deploy workloads with confidence. IBM Platform Computing Cloud Service lets users economically add computing capacity by accessing ready-to-use clusters in the cloud, delivering high performance that compares favorably to cloud offerings from other providers. Tests show that the IBM service delivers the best (or ties for the best) absolute performance in all test categories. Learn more.
Tags : ibm, platform computing cloud services, benchmarking performance
     IBM
By: IBM     Published Date: Sep 16, 2015
Are you trying to support more variable workloads than your environment can handle? Could you benefit from a high performance cluster, but lack the budget or resources to deploy and manage technical computing infrastructure? Are you running out of data center space but still need to grow your compute capacity? If you want answers to any of these questions, join us for an informative webinar describing the advantages and pitfalls of relocating a high performance workload to the cloud. View this webinar to learn:
- Why general purpose clouds are insufficient for technical computing, analytics and Hadoop workloads
- How high performance clouds can improve your profitability and give you a competitive edge
- How to ensure that your cloud environment is secure
- How to evaluate which applications are suitable for a hybrid or public cloud environment
- How to get started and choose a service provider
Tags : 
     IBM
By: Seagate     Published Date: Sep 30, 2015
Although high-performance computing (HPC) often stands apart from a typical IT infrastructure—it uses highly specialized scale-out compute, networking and storage resources—it shares with mainstream IT the ability to push data center capacity to the breaking point. Much of this is due to data center inefficiencies caused by HPC storage growth. The Seagate® ClusterStor™ approach to scale-out HPC storage can significantly improve data center efficiency. No other vendor solution offers the same advantages.
Tags : 
     Seagate
By: IBM     Published Date: Feb 13, 2015
This major Hollywood studio wanted to reduce the compute time required to render animated films. Using an HPC solution powered by Platform LSF, the studio increased its compute capacity, allowing the release of two major feature films and multiple animated shorts.
Tags : 
     IBM
By: IBM     Published Date: Feb 13, 2015
The new Clusters for Dummies e-book from IBM Platform Computing explains how clustering technology enables you to run higher quality simulations and shorten the time to discoveries. In this e-book, you'll discover how to:
- Make a cluster work for your business
- Create clusters using commodity components
- Use workload management software for reliable results
- Use cluster management software for simplified administration
- Learn from case studies of clusters in action
With clustering technology you can increase your compute capacity, accelerate the innovation process, shrink time to insights, and improve your productivity, all of which will lead to increased competitiveness for your business.
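A quick illustration (not from the e-book itself) of why added cluster capacity shortens time to results: Amdahl's law bounds the speedup from spreading a mostly parallel workload across more nodes. The parallel fraction and node count below are hypothetical.

```python
def amdahl_speedup(parallel_fraction, nodes):
    """Upper bound on speedup when 'parallel_fraction' of the work scales across 'nodes'."""
    return 1.0 / ((1 - parallel_fraction) + parallel_fraction / nodes)

# A simulation that is 95% parallelizable, run on a 64-node commodity cluster (illustrative).
print(f"~{amdahl_speedup(0.95, 64):.1f}x faster than a single node")  # ~15.4x
```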
Tags : 
     IBM
By: Panasas     Published Date: Oct 02, 2014
HPC and technical computing environments require the collection, storage, and transmission of large-scale datasets. To meet these demands, data center architects must consider how increasing storage capacity over time will affect HPC workloads, performance, and system availability. While many enterprises have looked to scale-up NAS to meet their storage needs, this approach can lead to data islands that make it difficult to share data. Distributed, scale-out storage was developed to get around the technology limitations of scale-up NAS architectures.
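To make the scale-up versus scale-out distinction concrete, here is a minimal sketch with invented numbers (it does not describe any specific Panasas or NAS product): in a scale-up design, throughput stays pinned to the controller as shelves are added, while a scale-out design adds bandwidth with every node.

```python
def scale_up_throughput(shelves, controller_gbps=5):
    """Scale-up NAS: extra shelves add capacity, but the single controller caps throughput."""
    return controller_gbps

def scale_out_throughput(nodes, per_node_gbps=2):
    """Scale-out storage: each added node contributes both capacity and bandwidth."""
    return nodes * per_node_gbps

for n in (4, 8, 16):
    print(f"{n} units: scale-up ~{scale_up_throughput(n)} Gbit/s, scale-out ~{scale_out_throughput(n)} Gbit/s")
```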
Tags : 
     Panasas
By: Lenovo UK     Published Date: May 13, 2019
It’s time to think bigger than incremental processing power and storage capacity upgrades. In 2019 IT modernisation offers much more, providing opportunities to closely align infrastructure with your most ambitious goals and build a future-facing foundation for success. Whatever your goals, this playbook gives you six versatile strategies that can help you plan a successful IT modernisation project, supported by examples from our industry-leading data centre portfolio.
Tags : 
     Lenovo UK
By: Group M_IBM Q2'19     Published Date: Apr 08, 2019
Empowering the Automotive Industry through Intelligent Orchestration. With the increasing complexity and volume of cyberattacks, organizations must have the capacity to adapt quickly and confidently under changing conditions. Accelerating incident response times to safeguard the organization's infrastructure and data is paramount. Achieving this requires a thoughtful plan: one that addresses the security ecosystem, incorporates security orchestration and automation, and provides adaptive workflows to empower security analysts. In the white paper "Six Steps for Building a Robust Incident Response Function," IBM Resilient provides a framework for security teams to build a strong incident response program and deliver organization-wide coordination and optimizations to accomplish these goals.
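As a generic, hedged sketch of what security orchestration and automation with an approval gate can look like (this is not IBM Resilient's API and not the paper's six steps; all names are hypothetical):

```python
# Toy incident-response playbook; every function name here is hypothetical.
def enrich(alert):
    """Attach severity based on a (stubbed) threat-intelligence lookup."""
    return {**alert, "severity": "high" if alert["indicator"] == "known-bad" else "low"}

def contain(host):
    print(f"isolating {host} from the network")

def run_playbook(alert, require_approval=True):
    incident = enrich(alert)
    if incident["severity"] == "high":
        if require_approval:
            print("high severity: requesting analyst approval before containment")
        else:
            contain(incident["host"])
    print("incident recorded for post-incident metrics")

run_playbook({"host": "plant-hmi-07", "indicator": "known-bad"})
```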
Tags : 
     Group M_IBM Q2'19
By: NetApp     Published Date: Dec 18, 2013
IT managers have indicated their two most significant challenges associated with managing unstructured data at multiple locations were keeping pace with data growth and improving data protection . Learn how the NetApp Distributed Content Repository provides advanced data protection and system recovery capabilities that can enable multiple data centers and remote offices to maintain access to data through hardware and software faults. Key benefits are: - continuous access to file data while maintaining data redundancy with no administrator intervention needed. - easily integrated and deployed into a distributed environment, providing transparent, centrally managed content storage - provision of secure multi-tenancy using security partitions. - provision effectively infinite, on-demand capacity while providing fast access to files and objects in the cloud. - secure, robust data protection techniques that enable data to persist beyond the life of the storage it resides on
Tags : 
     NetApp
By: Upsite Technologies     Published Date: Sep 18, 2013
The average computer room today has cooling capacity that is nearly four times the IT heat load. Using data from 45 sites reviewed by Upsite Technologies, this white paper will show how you can calculate, benchmark, interpret, and benefit from a simple and practical metric called the Cooling Capacity Factor (CCF). Calculating the CCF is the quickest and easiest way to determine cooling infrastructure utilization and the potential gains to be realized from airflow management (AFM) improvements.
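As a rough companion to the paper, a minimal sketch of the calculation, assuming the commonly published form of the metric (total running rated cooling capacity divided by roughly 110% of the IT critical load, the extra allowance covering non-IT heat); the white paper's exact method and site data may differ, and the figures below are hypothetical.

```python
def cooling_capacity_factor(rated_cooling_kw, it_load_kw, non_it_allowance=0.10):
    """Estimate the Cooling Capacity Factor (CCF).

    rated_cooling_kw -- summed manufacturer-rated capacity of running cooling units (kW)
    it_load_kw       -- measured IT critical load (kW)
    non_it_allowance -- extra heat from lights, people, building envelope (~10% is a common assumption)
    """
    return rated_cooling_kw / (it_load_kw * (1 + non_it_allowance))

# Hypothetical room: 1,200 kW of running cooling capacity for a 320 kW IT load.
print(f"CCF = {cooling_capacity_factor(1200, 320):.1f}")  # ~3.4, close to the ~4x oversizing cited above
```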
Tags : ccf, upsite technologies, cooling capacity factor, energy costs, cooling, metrics, practical, benchmark
     Upsite Technologies
By: Spectrum Enterprise     Published Date: Feb 07, 2018
How Fiber Powers Growth – An Expert Q&A Guide provided by Spectrum Enterprise. Businesses today need bandwidth capacity to handle complex applications and ever-increasing data. See how technology experts rely on fiber to increase productivity and provide stronger growth opportunities.
Tags : 
     Spectrum Enterprise
By: HPE APAC     Published Date: Jun 20, 2017
What do HPE’s Flexible Capacity and a kitten-penguin have in common? They’re both hybrids, but only one is available for your IT infrastructure. Flexible Capacity lets you have it all—the scalability of the cloud and the control of on-premise infrastructure. Watch this video to find out more.
Tags : cloud, hybrid cloud, flexibility, infrastructure
     HPE APAC
By: HPE APAC     Published Date: Jun 20, 2017
HPE Flexible Capacity delivers a pay-as-you-go solution that enables you to scale instantly to handle growth, without the usual long procurement process. You avoid tying up capital, and your capacity doesn't run out. Watch this video to find out more.
Tags : procurement, flexibility, growth, hpe
     HPE APAC
By: HPE APAC     Published Date: Jun 20, 2017
HPE Flexible Capacity service enables a cloud-like consumption model and economics for your on-premise IT. Now you don't have to make a difficult choice between the security and control of on-premise IT and the agility and economics of the public cloud. Watch this video to find out more.
Tags : cloud, agility, data center, security, public cloud
     HPE APAC
By: Spectrum Enterprise     Published Date: Oct 29, 2018
Bandwidth. Speed. Throughput. These terms are not interchangeable. They are interrelated concepts in data networking that measure capacity, the time it takes to get from one point to the next, and the actual amount of data you're receiving, respectively. When you buy an Internet connection from Spectrum Enterprise, you're buying a pipe between your office and the Internet with a set capacity, whether it is 25 Mbps, 10 Gbps, or any increment in between. However, the bandwidth we provide does not tell the whole story; it is the throughput of the entire system that matters. Throughput is affected by obstacles, overhead and latency, meaning the throughput of the system will never equal the bandwidth of your Internet connection. The good news is that an Internet connection from Spectrum Enterprise is engineered to ensure you receive the capacity you purchase; we proactively monitor your bandwidth to ensure problems are dealt with promptly, and we are your advocates across the Internet.
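To make the bandwidth/throughput/latency distinction concrete, here is a minimal Python sketch of two standard back-of-the-envelope bounds (illustrative numbers, not Spectrum Enterprise measurements): protocol overhead trims the usable payload rate, and a single TCP flow is capped by its receive window divided by round-trip latency regardless of link capacity.

```python
def payload_rate_mbps(link_mbps, overhead_fraction=0.05):
    """Usable payload rate after framing and protocol overhead (assumed ~5% here)."""
    return link_mbps * (1 - overhead_fraction)

def tcp_window_limit_mbps(window_bytes, rtt_ms):
    """Classic single-flow TCP bound: throughput <= window size / round-trip time."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1e6

print(f"{payload_rate_mbps(100):.0f} Mbps payload on a 100 Mbps link")                       # ~95 Mbps
print(f"{tcp_window_limit_mbps(65_535, 40):.0f} Mbps cap for a 64 KB window at 40 ms RTT")   # ~13 Mbps
```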
Tags : 
     Spectrum Enterprise
By: HP AppPulse Mobile     Published Date: Oct 12, 2015
Improve website monitoring, performance, capacity and security.
Tags : online, capacity, security, website monitoring, performance
     HP AppPulse Mobile
By: Cisco     Published Date: Dec 11, 2018
The Cisco SD-WAN solution is a cloud-delivered overlay WAN architecture that enables digital and cloud transformation at enterprises. It significantly reduces WAN costs and the time to deploy new services, and builds a robust security architecture crucial for hybrid networks. Enterprises today face major user experience problems for SaaS applications because of networking constraints. The centralized Internet exit architecture is inefficient and results in poor SaaS performance, and branch sites are running out of capacity to handle Internet traffic, a concern because more than 50% of branch traffic is destined for the cloud. More importantly, there are many dynamic changes in Internet gateways and SaaS hosting servers that lead to unpredictability in performance. The Cisco SD-WAN solution solves these problems by creating multiple Internet exit points, adding high bandwidth at branch locations, and dynamically steering around problems in real time, resulting in an optimal SaaS experience.
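A toy sketch of the steering idea described above, with invented paths and thresholds; it is not Cisco's algorithm, only the general pattern of scoring each Internet exit by measured loss and latency and sending SaaS traffic over the best one.

```python
# Hypothetical probe results for three Internet exits available to a branch.
paths = [
    {"name": "branch-direct-internet", "latency_ms": 38,  "loss_pct": 0.1},
    {"name": "regional-gateway",       "latency_ms": 55,  "loss_pct": 0.0},
    {"name": "central-internet-exit",  "latency_ms": 120, "loss_pct": 1.5},
]

def score(path, max_loss_pct=1.0):
    """Lower is better; paths over the loss threshold are pushed out of contention."""
    penalty = 1_000 if path["loss_pct"] > max_loss_pct else 0
    return path["latency_ms"] + 10 * path["loss_pct"] + penalty

best = min(paths, key=score)
print(f"steer SaaS flows via {best['name']}")  # re-evaluated as new probe data arrives
```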
Tags : 
     Cisco
By: Hewlett Packard Enterprise     Published Date: Jul 18, 2018
Case study briefs for Transguard, BiLFINGER, Aldermore PLC, PinkRoccade, Okinawa Cross Head, Siemens, Sogeti, Norrbottens, Dansk Supermarked Group, ITS Nordic.
Tags : 
     Hewlett Packard Enterprise
By: Hitachi Vantara     Published Date: Mar 08, 2019
Today, data center managers are being asked to do more than ever before: Bring on more applications at a faster pace, add more capacity to existing applications and deliver more performance.
Tags : 
     Hitachi Vantara
By: Hewlett Packard Enterprise     Published Date: May 11, 2018
Very little data is available on how effectively enterprises are managing private cloud deployments in the real world. Are they doing so efficiently, or are they facing challenges in areas such as performance, TCO and capacity? Hewlett Packard Enterprise commissioned 451 Research to explore these issues through a survey of IT decision-makers and data from the Cloud Price Index.
Tags : 
     Hewlett Packard Enterprise
By: Hewlett Packard Enterprise     Published Date: May 11, 2018
If your business is like most, you are grappling with data storage. In an annual Frost & Sullivan survey of IT decision-makers, storage growth has been listed among the top data center challenges for each of the past five years. With businesses collecting, replicating, and storing exponentially more data than ever before, simply acquiring sufficient storage capacity is a problem. Even more challenging is that businesses expect more from their stored data. Data is now recognized as a precious corporate asset and competitive differentiator, spawning new business models, new revenue streams, greater intelligence, streamlined operations, and lower costs. Booming market trends such as the Internet of Things and Big Data analytics are generating new opportunities faster than IT organizations can prepare for them.
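A simple capacity-planning sketch related to the growth problem above: given current usage and a compound annual growth rate, estimate how long the installed capacity will last. All figures are hypothetical.

```python
import math

def years_until_full(capacity_tb, used_tb, annual_growth):
    """Years until usage reaches installed capacity at a compound annual growth rate."""
    if used_tb >= capacity_tb:
        return 0.0
    return math.log(capacity_tb / used_tb) / math.log(1 + annual_growth)

# Hypothetical estate: 500 TB installed, 280 TB used, data growing 35% per year.
print(f"{years_until_full(500, 280, 0.35):.1f} years of headroom left")  # ~1.9 years
```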
Tags : 
     Hewlett Packard Enterprise
By: Hewlett Packard Enterprise     Published Date: May 10, 2019
The Nimble Secondary Flash array represents a new type of data storage, designed to maximize both capacity and performance. By adding high-performance flash storage to a capacity-optimized architecture, it provides a unique backup platform that lets you put your backup data to work. The Nimble Secondary Flash array uses flash performance to provide both near-instant backup and recovery from any primary storage system. It is a single device for backup, disaster recovery, and even local archiving. By using flash, you can accomplish real work such as dev/test, QA, and analytics. Deep integration with Veeam's leading backup software simplifies data lifecycle management and provides a path to cloud archiving.
Tags : 
     Hewlett Packard Enterprise
By: HPE     Published Date: Jan 04, 2016
Think it's too complicated to change your storage strategy? Think again! With HP 3PAR StoreServ Storage it's never been easier to migrate from legacy EMC, HDS or HP EVA arrays while shrinking your footprint and improving capacity at a lower price point.
Tags : 
     HPE
By: HPE Intel     Published Date: Jan 11, 2016
While flash storage can enhance the performance of your applications, there are three potential roadblocks to realizing the full value from a flash investment:
- Storage network capacity
- Storage architecture
- Resiliency
Vish Mulchand is Senior Director of Product Management and Marketing for Storage, Hewlett Packard Enterprise.
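A quick, hedged illustration of the first roadblock (storage network capacity), with made-up figures that are not drawn from the HPE material: check whether the fabric between hosts and the array can carry what the flash array could otherwise deliver.

```python
def fabric_limit_gbps(ports, port_speed_gbps, efficiency=0.9):
    """Aggregate usable front-end bandwidth of the storage network."""
    return ports * port_speed_gbps * efficiency

array_capability_gbps = 80  # hypothetical all-flash array throughput (Gbit/s)
fabric = fabric_limit_gbps(ports=4, port_speed_gbps=16)  # e.g. four 16G Fibre Channel links
print(f"fabric limit ~{fabric:.0f} Gbit/s vs array capability {array_capability_gbps} Gbit/s")
if fabric < array_capability_gbps:
    print("the network, not the flash media, would be the bottleneck -- add or upgrade links")
```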
Tags : 
     HPE Intel