workload data

Results 1 - 25 of 250
By: IBM     Published Date: Sep 16, 2015
The IBM Spectrum Scale solution delivered up to 11x better throughput than EMC Isilon for Spectrum Protect (TSM) workloads. Using published data, Edison compared a solution built on EMC® Isilon® against an IBM® Spectrum Scale™ solution. (IBM Spectrum Scale was formerly IBM® General Parallel File System™ or IBM® GPFS™, also known by the code name Elastic Storage.) For both solutions, IBM® Spectrum Protect™ (formerly IBM Tivoli® Storage Manager or IBM® TSM®) was used as a common workload performing backups to the target storage systems evaluated.
Tags : 
     IBM
By: IBM     Published Date: Sep 16, 2015
Six criteria for evaluating high-performance cloud service providers. Engineering, scientific, analytics, big data and research workloads place extraordinary demands on technical and high-performance computing (HPC) infrastructure. Supporting these workloads can be especially challenging for organizations that have unpredictable spikes in resource demand, or that need access to additional compute or storage resources for a project or to support a growing business. Software Defined Infrastructure (SDI) enables organizations to deliver HPC services in the most efficient way possible, optimizing resource utilization to accelerate time to results and reduce costs. SDI is the foundation for a fully integrated environment, optimizing compute, storage and networking infrastructure to quickly adapt to changing business requirements, and dynamically managing workloads and data.
Tags : 
     IBM
By: IBM     Published Date: Sep 16, 2015
Are you trying to support more variable workloads than your environment can handle? Could you benefit from a high-performance cluster, but lack the budget or resources to deploy and manage technical computing infrastructure? Are you running out of data center space but still need to grow your compute capacity? If you want answers to any of these questions, join us for an informative webinar describing the advantages and pitfalls of relocating a high-performance workload to the cloud. View this webinar to learn:
- Why general-purpose clouds are insufficient for technical computing, analytics and Hadoop workloads
- How high-performance clouds can improve your profitability and give you a competitive edge
- How to ensure that your cloud environment is secure
- How to evaluate which applications are suitable for a hybrid or public cloud environment
- How to get started and choose a service provider
Tags : 
     IBM
By: IBM     Published Date: May 20, 2015
Are you trying to support more variable workloads than your environment can handle? Could you benefit from a high-performance cluster, but lack the budget or resources to deploy and manage technical computing infrastructure? Are you running out of data center space but still need to grow your compute capacity? If you want answers to any of these questions, join us for an informative webinar describing the advantages and pitfalls of relocating a high-performance workload to the cloud. View this webinar to learn:
- Why general-purpose clouds are insufficient for technical computing, analytics and Hadoop workloads
- How high-performance clouds can improve your profitability and give you a competitive edge
- How to ensure that your cloud environment is secure
- How to evaluate which applications are suitable for a hybrid or public cloud environment
- How to get started and choose a service provider
Tags : 
     IBM
By: Adaptive Computing     Published Date: Feb 21, 2014
Big data applications represent a fast-growing category of high-value applications that are increasingly employed by business and technical computing users. However, they have exposed an inconvenient dichotomy in the way resources are utilized in data centers. Conventional enterprise and web-based applications can be executed efficiently in virtualized server environments, where resource management and scheduling is generally confined to a single server. By contrast, data-intensive analytics and technical simulations demand large aggregated resources, necessitating intelligent scheduling and resource management that spans a computer cluster, cloud, or entire data center. Although these tools exist in isolation, they are not available in a general-purpose framework that allows them to interoperate easily and automatically within existing IT infrastructure.
Tags : 
     Adaptive Computing
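The single-server versus cluster-wide scheduling contrast described in the abstract above can be illustrated with a toy first-fit placement across node capacities. This is a minimal sketch for illustration only; the job names, node names, and `place` function are invented here and are not part of any Adaptive Computing product.

```python
def place(jobs, nodes):
    """Assign each job (cores needed) to the first node with enough free cores.

    A job that fits on no node maps to None. This is deliberately naive:
    real cluster schedulers also weigh memory, data locality, and priority.
    """
    free = dict(nodes)  # node name -> free cores remaining
    placement = {}
    for job, cores in jobs.items():
        target = next((n for n, c in free.items() if c >= cores), None)
        placement[job] = target
        if target is not None:
            free[target] -= cores
    return placement

# Three jobs spread across two nodes; no single node could host all of them,
# which is the point the abstract makes about cluster-spanning scheduling.
print(place({"sim": 8, "etl": 4, "ml": 16}, {"n1": 12, "n2": 16}))
```

A virtualized single-server scheduler would only ever see one entry in `nodes`; the cluster-wide view is what lets the 16-core job land at all.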
By: IBM     Published Date: Nov 14, 2014
View this demo to learn how IBM Platform Computing Cloud Service running on the SoftLayer Cloud helps you: quickly get your applications deployed on ready-to-run clusters in the cloud; manage workloads seamlessly between on-premises and cloud-based resources; get help from the experts with 24x7 support; share and manage data globally; and protect your IP through physical isolation of bare metal hardware assets.
Tags : 
     IBM
By: IBM     Published Date: Feb 13, 2015
To quickly and economically meet new and peak demands, Platform LSF (SaaS) and Platform Symphony (SaaS) workload management as well as Elastic Storage on Cloud data management software can be delivered as a service, complete with SoftLayer cloud infrastructure and 24x7 support for technical computing and service-oriented workloads. Watch this demonstration to learn how the IBM Platform Computing Cloud Service can be used to simplify and accelerate financial risk management using IBM Algorithmics.
Tags : 
     IBM
By: Panasas     Published Date: Oct 02, 2014
HPC and technical computing environments require the collection, storage, and transmission of large-scale datasets. To meet these demands, data center architects must consider how increasing storage capacity over time will affect HPC workloads, performance, and system availability. While many enterprises have looked to scale-up NAS to meet their storage needs, this approach can lead to data islands that make it difficult to share data. Distributed, scale-out storage was developed to get around the technology limitations of scale-up NAS architectures.
Tags : 
     Panasas
By: Dell and Intel®     Published Date: Nov 18, 2015
Unleash the extreme performance and scalability of the Lustre® parallel file system for high performance computing (HPC) workloads, including technical ‘big data’ applications common within today’s enterprises. The Dell Storage for HPC with Intel® Enterprise Edition (EE) for Lustre Solution allows end-users that need the benefits of large-scale, high-bandwidth storage to tap the power and scalability of Lustre, with its simplified installation, configuration, and management features that are backed by Dell and Intel®.
Tags : 
     Dell and Intel®
By: ASG Software Solutions     Published Date: Nov 05, 2009
Effective workload automation that provides complete management-level visibility into real-time events impacting the delivery of IT services is needed by the data center more than ever before. The traditional job scheduling approach, with an uncoordinated set of tools that often requires reactive manual intervention to minimize service disruptions, is failing more than ever in today's complex world of IT, with its multiple platforms, applications and virtualized resources.
Tags : asg, cmdb, bsm, itil, metacmdb, workload automation, wla, visibility, configuration management, metadata, lob, sdm, service dependency mapping, ecommerce, bpm, workflow, itsm, critical application
     ASG Software Solutions
By: ASG Software Solutions     Published Date: Feb 24, 2010
A recent survey of CIOs found that over 75% want to develop an overall information strategy in the next three years, yet over 85% are not close to implementing an enterprise-wide content management strategy. Meanwhile, data runs rampant, slows systems, and impacts performance. Hard-copy documents multiply, become damaged, or simply disappear.
Tags : asg, cmdb, bsm, itil, metacmdb, archiving, sap, ilm, mobius, workload automation, wla, visibility, configuration management, metadata, lob, sdm, service dependency mapping, ecommerce
     ASG Software Solutions
By: ASG Software Solutions     Published Date: Feb 23, 2010
There are success stories of businesses that have implemented Business Service Management (BSM) with well-documented, bottom-line results. What do these organizations know that their discouraged counterparts don't?
Tags : asg, cmdb, bsm, itil, metacmdb, archiving, sap, ilm, mobius, workload automation, wla, visibility, configuration management, metadata, lob, sdm, service dependency mapping, ecommerce
     ASG Software Solutions
By: Hewlett Packard Enterprise     Published Date: May 11, 2018
Most IT professionals today recognize that enterprise IT will be hybrid in the future. To provide the optimal foundation for each workload being deployed, the hybrid IT environment will include cloud-based infrastructures—from multiple providers—co-existing alongside infrastructure within the enterprise data center or a hosted environment. But not all hyperconverged solutions yield the same results. The right hyperconverged infrastructure can meet your IT needs both today and well into the future. In this paper, we will talk about where your data center needs to be in the next five years to meet changing business demands, and how the roles of IT professionals will evolve. We will also review “hyperconvergence” models, and how they can best meet your IT needs both today and in the future, as well as the benefits you can expect along the way. Finally, we discuss what to look for in the right hyperconverged provider, who will position your IT department for success.
Tags : 
     Hewlett Packard Enterprise
By: Hewlett Packard Enterprise     Published Date: Mar 26, 2018
Over the past several years, the IT industry has seen solid-state (or flash) technology evolve at a record pace. Early on, the high cost and relative newness of flash meant that it was mainly relegated to accelerating niche workloads. More recently, however, flash storage has “gone mainstream” thanks to maturing media technology. Lower media cost has resulted from memory innovations that have enabled greater density and new architectures such as 3D NAND. Simultaneously, flash vendors have refined how to exploit flash storage’s idiosyncrasies: for example, they can extend the flash media lifespan through data reduction and other techniques.
Tags : 
     Hewlett Packard Enterprise
By: Hewlett Packard Enterprise     Published Date: Mar 26, 2018
Today’s data centers are expected to deploy, manage, and report on different tiers of business applications, databases, virtual workloads, home directories, and file sharing simultaneously. They also need to co-locate multiple systems while sharing power and energy. This is true for large as well as small environments. The trend in modern IT is to consolidate as much as possible to minimize cost and maximize efficiency of data centers and branch offices. HPE 3PAR StoreServ is highly efficient, flash-optimized storage engineered for the true convergence of block, file, and object access to help consolidate diverse workloads efficiently. HPE 3PAR OS and converged controllers incorporate multiprotocol support into the heart of the system architecture.
Tags : 
     Hewlett Packard Enterprise
By: Hewlett Packard Enterprise     Published Date: Mar 26, 2018
Modern storage arrays can’t compete on price without a range of data reduction technologies that help reduce the overall total cost of ownership of external storage. Unfortunately, there is no one single data reduction technology that fits all data types, and we see savings being made with both data deduplication and compression, depending on the workload. Typically, OLTP-type data (databases) works well with compression and can achieve between 2:1 and 3:1 reduction, depending on the data itself. Deduplication works well with large volumes of repeated data like virtual machines or virtual desktops, where many instances or images are based on a similar “gold” master.
Tags : 
     Hewlett Packard Enterprise
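The 2:1 and 3:1 reduction ratios quoted in the abstract above translate directly into effective capacity and space savings. A minimal sketch of that arithmetic follows; the 100 TB raw capacity is a made-up illustration, not a figure from the paper.

```python
def effective_capacity(raw_tb: float, ratio: float) -> float:
    """Logical capacity implied by a data-reduction ratio (pass 2.5 for 2.5:1)."""
    return raw_tb * ratio

def savings_fraction(ratio: float) -> float:
    """Fraction of physical space saved at a given reduction ratio."""
    return 1.0 - 1.0 / ratio

raw = 100.0  # TB of physical flash (hypothetical)
for r in (2.0, 3.0):
    print(f"{r}:1 -> {effective_capacity(raw, r):.0f} TB logical, "
          f"{savings_fraction(r):.0%} saved")
```

Note the diminishing returns: moving from 2:1 to 3:1 raises savings from 50% to about 67%, which is why the workload-to-technique match (compression for OLTP, dedup for VM images) matters more than chasing a single headline ratio.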
By: Commvault     Published Date: Jul 06, 2016
Today, nearly every data center has become heavily virtualized. In fact, according to Gartner, as many as 75% of x86 server workloads are already virtualized in the enterprise data center. Yet even with the growth rate of virtual machines outpacing the rate of physical servers, industry-wide, most virtual environments continue to be protected by backup systems designed for physical servers, not the virtual infrastructure they are used on. Data protection products that are virtualization-focused may deliver additional support for virtual processes, but there are pitfalls in selecting the right approach. This paper will discuss five common costs that can remain hidden until after a virtualization backup system has been fully deployed.
Tags : storage, backup, recovery, best practices, networking, it management, enterprise applications, data management
     Commvault
By: Oracle CX     Published Date: Oct 19, 2017
Oracle has just announced a new microprocessor, and the servers and engineered system that are powered by it. The SPARC M8 processor fits in the palm of your hand, but it contains the result of years of co-engineering hardware and software together to run enterprise applications with unprecedented speed and security. The SPARC M8 chip contains 32 of today’s most powerful cores for running Oracle Database and Java applications. Benchmarking data shows that the performance of these cores reaches twice the performance of Intel’s x86 cores. This is the result of exhaustive work on designing smart execution units and threading architecture, and on balancing metrics such as core count, memory and IO bandwidth. It also required millions of hours in testing chip design and operating system software on real workloads for database and Java. Having faster cores means increasing application capability while keeping the core count and software investment under control.
Tags : 
     Oracle CX
By: Oracle CX     Published Date: Oct 19, 2017
Business Enterprises today need to become more agile, meet new and increasing workload and security requirements, while reducing overall IT cost and risk. To meet these requirements many companies are turning to cloud computing. To remain competitive companies need to formulate a strategy that can easily move them from traditional on-premises IT to private or public clouds. A complete cloud strategy will likely include both private and public clouds because some applications and data might not be able to move to a public cloud. Moving to the cloud should not create information silos but should improve data sharing. Any cloud strategy should make sure that it is possible to integrate on-premises, private cloud and public cloud data and applications. Furthermore, any on-premises cloud deployments must be able to easily migrate to public cloud in the future.
Tags : 
     Oracle CX
By: Oracle CX     Published Date: Oct 20, 2017
This whitepaper explores the new SPARC S7 server features and then compares this offering to a similar x86 offering. The key characteristics of the SPARC S7 highlighted are:
- Designed for scale-out and cloud infrastructures
- SPARC S7 processor with greater core performance than the latest Intel Xeon E5 processor
- Software in Silicon, which offers hardware-based features such as data acceleration and security
The SPARC S7 is then compared to a similar x86 solution from three different perspectives: performance, risk and cost. Performance matters, as business markets are driving IT to provide an environment that:
- Continuously provides real-time results
- Processes more complex workload stacks
- Optimizes usage of per-core software licenses
Risk matters today and into the foreseeable future, as challenges to secure systems and data are becoming more frequent and invasive, from within and from outside. Oracle SPARC systems approach risk management from multiple perspectives.
Tags : 
     Oracle CX
By: Oracle CX     Published Date: Oct 20, 2017
Oracle has just announced a new microprocessor, and the servers and engineered system that are powered by it. The SPARC M8 processor fits in the palm of your hand, but it contains the result of years of co-engineering hardware and software together to run enterprise applications with unprecedented speed and security. The SPARC M8 chip contains 32 of today’s most powerful cores for running Oracle Database and Java applications. Benchmarking data shows that the performance of these cores reaches twice the performance of Intel’s x86 cores. This is the result of exhaustive work on designing smart execution units and threading architecture, and on balancing metrics such as core count, memory and IO bandwidth. It also required millions of hours in testing chip design and operating system software on real workloads for database and Java. Having faster cores means increasing application capability while keeping the core count and software investment under control.
Tags : 
     Oracle CX
By: IBM     Published Date: Jun 29, 2018
LinuxONE from IBM is an example of a secure data-serving infrastructure platform that is designed to meet the requirements of current-gen as well as next-gen apps. IBM LinuxONE is ideal for firms that want the following:
- Extreme security: Firms that put data privacy and regulatory concerns at the top of their requirements list will find that LinuxONE comes built in with best-in-class security features such as EAL5+ isolation, crypto key protection, and a Secure Service Container framework.
- Uncompromised data-serving capabilities: LinuxONE is designed for structured and unstructured data consolidation and optimized for running modern relational and nonrelational databases. Firms can gain deep and timely insights from a "single source of truth."
- Unique balanced system architecture: The nondegrading performance and scaling capabilities of LinuxONE, thanks to a unique shared memory and vertical scale architecture, make it suitable for workloads such as databases and systems of record.
Tags : 
     IBM
By: Dell PC Lifecycle     Published Date: Mar 09, 2018
Compression algorithms reduce the number of bits needed to represent a given data set. The higher the compression ratio, the more storage space this particular data-reduction technique saves. During our OLTP test, the Unity array achieved a compression ratio of 3.2:1 on the database volumes, while the 3PAR array averaged only 1.3:1. In our data mart load test, the 3PAR achieved a ratio of 1.4:1 on the database volumes, the Unity array only 1.3:1.
Tags : 
     Dell PC Lifecycle
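A compression ratio like the 3.2:1 quoted in the abstract above is simply host-written (logical) bytes divided by the bytes actually consumed on the array. A minimal sketch, with hypothetical volume sizes chosen only to reproduce that figure:

```python
def compression_ratio(logical_bytes: float, physical_bytes: float) -> float:
    """Bytes written by hosts divided by bytes actually consumed on the array."""
    return logical_bytes / physical_bytes

# Hypothetical figures for illustration, not measurements from the report.
logical = 320 * 2**30   # 320 GiB written by the database hosts
physical = 100 * 2**30  # 100 GiB consumed on flash after compression
print(f"{compression_ratio(logical, physical):.1f}:1")
```

The same formula underlies every ratio in the comparison; what differs between arrays is how much physical space each one needs for the same logical workload.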
By: Dell PC Lifecycle     Published Date: Aug 13, 2018
The all-flash Dell EMC SC5020 storage array handled transactional database workloads and data mart imports better than an HPE solution without sacrificing performance. With the all-flash Dell EMC™ SC5020 storage array, your company could attend to more customer orders each minute and save time while simultaneously importing data.
Tags : 
     Dell PC Lifecycle

Get your white papers featured in the insideHPC White Paper Library. Contact: Kevin@insideHPC.com