search performance

Results 1 - 25 of 191
By: Cray     Published Date: Jul 02, 2015
The Cray XC series is a distributed memory system developed as part of Cray’s participation in the Defense Advanced Research Projects Agency’s (DARPA) High Productivity Computing System (HPCS) program. Previously codenamed “Cascade,” the Cray XC system is capable of sustained multi-petaflops performance and features a hybrid architecture combining multiple processor technologies, a high-performance network and a high-performance operating system and programming environment. (A minimal sketch of the distributed-memory programming model follows this entry.)
Tags : 
     Cray
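A minimal sketch, assuming mpi4py is available, of the distributed-memory programming style that systems like the Cray XC support: each process owns its own data and results are combined explicitly over the interconnect. The block size and workload are invented for illustration.

```python
# Illustrative only: a tiny distributed-memory computation using mpi4py.
# Each MPI rank works on its own slice of the data; partial results are
# combined with an explicit reduction, since no shared memory is assumed.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()          # this process's ID within the job
size = comm.Get_size()          # total number of processes

# Hypothetical workload: each rank sums a disjoint block of integers.
block = 1000
local_sum = sum(range(rank * block, (rank + 1) * block))

# Combine the partial sums on rank 0.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)

if rank == 0:
    print(f"{size} ranks computed a global sum of {total}")
```

A job like this would typically be launched with something along the lines of `mpirun -n 4 python partial_sums.py`; the exact launcher and programming environment differ by system.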
By: IBM     Published Date: Jun 05, 2014
As new research and engineering environments are expanded to include more powerful computers to run increasingly complex computer simulations, the management of these heterogeneous computing environments continues to increase in complexity as well. Integrated solutions that include the Intel® Many Integrated Core (MIC) architecture can dramatically boost aggregate performance for highly parallel applications. (A small parallel-processing sketch follows this entry.)
Tags : 
     IBM
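The MIC programming models themselves are typically OpenMP or offload directives in C, C++ or Fortran; purely as a generic illustration of the highly parallel, independent-task pattern that many-core hardware accelerates, here is a small Python sketch using the standard multiprocessing module. The simulate function and the 64-point parameter sweep are hypothetical.

```python
# Illustrative only: fan a batch of independent simulation tasks out
# across all available cores, the pattern that many-core hardware
# such as Intel MIC is designed to accelerate.
from multiprocessing import Pool, cpu_count

def simulate(params):
    # Placeholder for one independent simulation run.
    return sum(i * params for i in range(100_000))

if __name__ == "__main__":
    inputs = list(range(64))            # hypothetical parameter sweep
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(simulate, inputs)
    print(f"completed {len(results)} runs on {cpu_count()} cores")
```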
By: IBM     Published Date: Sep 02, 2014
Whether engaged in genome sequencing, drug design, product analysis or risk management, life sciences research needs high-performance technical environments with the ability to process massive amounts of data and support increasingly sophisticated simulations and analyses. Learn how IBM solutions such as IBM® Platform Computing™ high-performance cluster, grid and high-performance computing (HPC) cloud management software can help life sciences organizations transform and integrate their compute environments to develop products better, faster and at lower cost. (A brief cluster job-submission sketch follows this entry.)
Tags : ibm, life sciences, platform computing
     IBM
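Clusters built on the IBM Platform Computing family often use Platform LSF as the batch scheduler. Assuming an LSF-managed cluster, the sketch below shows how a hypothetical life-sciences job might be submitted from Python by shelling out to bsub; the job name, script, core count and queue are placeholders, not taken from the paper.

```python
# Illustrative only: submit a hypothetical genome-alignment job to an
# LSF-managed cluster by invoking the bsub command.
import subprocess

cmd = [
    "bsub",
    "-J", "align_sample42",   # job name (hypothetical)
    "-n", "16",               # request 16 cores
    "-q", "normal",           # queue name (site-specific)
    "-o", "align_%J.out",     # capture stdout; %J expands to the job ID
    "./run_alignment.sh",     # the actual workload script (hypothetical)
]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
print(result.stdout.strip())  # bsub echoes the assigned job ID and queue
```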
By: IBM     Published Date: Sep 16, 2015
Six criteria for evaluating a high-performance cloud services provider. Engineering, scientific, analytics, big data and research workloads place extraordinary demands on technical and high-performance computing (HPC) infrastructure. Supporting these workloads can be especially challenging for organizations that have unpredictable spikes in resource demand, or need access to additional compute or storage resources for a project or to support a growing business. Software Defined Infrastructure (SDI) enables organizations to deliver HPC services in the most efficient way possible, optimizing resource utilization to accelerate time to results and reduce costs. SDI is the foundation for a fully integrated environment, optimizing compute, storage and networking infrastructure to quickly adapt to changing business requirements, and dynamically managing workloads and data.
Tags : 
     IBM
By: Bright Computing     Published Date: Feb 22, 2016
Cloud computing offers organizations a clear opportunity to introduce operational efficiency that would be too difficult or costly to achieve internally. As such, we are continuing to see an increase in cloud adoption for workloads across the commercial market. However, recent research suggests that, despite continued increases in companies reporting that they are using cloud computing, the vast majority of corporate workloads remain on premises. It is clear that companies are carefully considering the security, privacy, and performance aspects of cloud transition and struggling to achieve cloud adoption with limited internal cloud expertise. Register to read more.
Tags : 
     Bright Computing
By: Dell and Intel®     Published Date: Nov 18, 2015
The NCSA Private Sector Program creates a high-performance computing cluster to help corporations overcome critical challenges. Through its Private Sector Program (PSP), NCSA has provided supercomputing, consulting, research, prototyping and development, and production services to more than one-third of the Fortune 50, in manufacturing, oil and gas, finance, retail/wholesale, bio/medical, life sciences, technology and other sectors. “We’re not the typical university supercomputer center, and PSP isn’t a typical group,” Giles says. “Our focus is on helping companies leverage high-performance computing in ways that make them more competitive.”
Tags : 
     Dell and Intel®
By: TYAN     Published Date: Jun 06, 2016
Organizations that implement high-performance computing (HPC) technologies have a wide range of requirements. From small manufacturing suppliers to national research institutions, using significant computing technologies is critical to creating innovative products and leading-edge research. No two HPC installations are the same. For maximum return, budget, software requirements, performance and customization all must be considered before installing and operating a successful environment.
Tags : 
     TYAN
By: Data Direct Networks     Published Date: Dec 31, 2015
Using high-performance parallel storage solutions, geologists and researchers can now incorporate larger data sets and execute more seismic and reservoir simulations faster than ever before, enabling higher-fidelity geological analysis and significantly reduced exploration risk. With the high costs of exploration, oil and gas companies are increasingly turning to high-performance DDN storage solutions to eliminate I/O bottlenecks and minimize risk and costs, while delivering a larger number of higher-fidelity simulations in the same time as traditional storage architectures. (A minimal parallel-I/O sketch follows this entry.)
Tags : 
     Data Direct Networks
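As a rough illustration of the parallel I/O pattern that such storage is built to serve, the sketch below (assuming mpi4py and NumPy) has every MPI rank write its own disjoint block of a shared file with a collective MPI-IO call. The file name and block size are arbitrary.

```python
# Illustrative only: collective parallel write, where each rank writes a
# disjoint block of one shared file instead of funneling I/O through rank 0.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

block = np.full(1_000_000, rank, dtype=np.float64)   # hypothetical data block
offset = rank * block.nbytes                          # disjoint byte range per rank

fh = MPI.File.Open(comm, "survey.dat",
                   MPI.MODE_CREATE | MPI.MODE_WRONLY)
fh.Write_at_all(offset, block)    # collective call: all ranks participate
fh.Close()
```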
By: Gigamon     Published Date: Sep 03, 2019
Network operations teams can no longer ignore the application layer. Application experience can make or break a digital enterprise, and today most enterprises are digital. To deliver optimal performance, network operations tools must be application-aware. However, application-awareness in the network and security tool layer is expensive and difficult to scale. Enterprises can mitigate these challenges with a network visibility architecture that includes application-aware network packet brokers (NPBs). EMA recommends that today’s network operations teams modernize their approach with full application visibility. EMA research has found that network teams are increasingly focused on directly addressing security risk reduction, service quality, end-user experience, and application performance. All of these new network operations benchmarks will require deeper application-level visibility. For instance, a network team focused on service quality will want to take a top-down approach to performance. (A toy application-classification sketch follows this entry.)
Tags : 
     Gigamon
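A deliberately toy stand-in for application awareness: tagging flow records with an application label and totaling traffic per application. Real application-aware NPBs classify traffic far more deeply than by port; the port map and flow records below are invented for illustration.

```python
# Illustrative only: a port-based classifier that tags flow records with an
# application label and totals bytes per application.
from collections import defaultdict

PORT_APPS = {443: "HTTPS", 80: "HTTP", 3306: "MySQL", 5060: "SIP"}  # assumed map

flows = [  # hypothetical flow records: (dst_port, bytes)
    (443, 120_000), (3306, 48_000), (443, 9_500), (5060, 700),
]

bytes_per_app = defaultdict(int)
for dst_port, nbytes in flows:
    app = PORT_APPS.get(dst_port, "unknown")
    bytes_per_app[app] += nbytes

for app, nbytes in sorted(bytes_per_app.items(), key=lambda kv: -kv[1]):
    print(f"{app:8s} {nbytes:>10,d} bytes")
```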
By: Intel     Published Date: Sep 30, 2019
Mountains of data promise valuable insights and innovation for businesses that rethink and redesign their system architectures. But companies that don’t re-architect might find themselves scrambling just to keep from being buried in the avalanche of data. The problem is not just in storing raw data, though. For businesses to stay competitive, they need to quickly and cost-effectively access and process all that data for business insights, research, artificial intelligence (AI), and other uses. Both memory and storage are required to enable this level of processing, and companies struggle to balance high costs against limited capacities and performance constraints. The challenge is even more daunting because different types of memory and storage are required for different workloads. Furthermore, multiple technologies might be used together to achieve the optimal tradeoff in cost versus performance. Intel is addressing these challenges with new memory and storage technologies. (A simplified tiering sketch follows this entry.)
Tags : 
     Intel
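A highly simplified sketch of the tiering idea: keep recently used items in a small fast tier and demote the rest to a larger, slower one. Here an in-memory dict stands in for fast memory and a shelve file for slower storage; the three-item capacity is arbitrary, and this is not Intel's actual technology.

```python
# Illustrative only: a two-tier store with a tiny "hot" in-memory tier (LRU)
# that spills evicted items to a slower persistent tier.
from collections import OrderedDict
import shelve

HOT_CAPACITY = 3                      # arbitrary, for illustration
hot = OrderedDict()                   # stands in for fast memory
cold = shelve.open("cold_tier.db")    # stands in for slower, cheaper storage

def put(key, value):
    hot[key] = value
    hot.move_to_end(key)
    if len(hot) > HOT_CAPACITY:
        old_key, old_val = hot.popitem(last=False)   # evict least recently used
        cold[old_key] = old_val

def get(key):
    if key in hot:
        hot.move_to_end(key)
        return hot[key]
    value = cold[key]                 # slower path; promote back to the hot tier
    put(key, value)
    return value

for i in range(6):
    put(f"block{i}", i * 100)
print(sorted(hot), get("block0"))     # block0 was demoted and is fetched back
cold.close()
```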
By: ASG Software Solutions     Published Date: May 05, 2010
Read this topical and informative white paper from EMA Research and see how you can attain peace of mind that your end user's application performance will exceed expectations.
Tags : asg, end user, service assurance, ema, it infrastructure, service management
     ASG Software Solutions
By: Gigaom     Published Date: Oct 16, 2019
This free 1-hour webinar from GigaOm Research brings together experts in IT infrastructure and cloud, featuring GigaOm analyst Mark Thiele and special guest Jacob Smith from Packet. The discussion will focus on what can be done when a business needs or wants to run application environments outside the public cloud and, in the age of public cloud adoption, how companies can remain competitive from an IT perspective. Most companies aren’t professional data center or infrastructure builders, and they shouldn’t have to be. This is in fact a driver for cloud adoption. Price, Performance, Proximity, Pride and Politics are reasons why enterprises might need more control at times over their infrastructure. This 1-hour webinar will explore:
- The considerations an enterprise must make when looking for a private cloud solution
- Examples of what others have done with managed private cloud
- Whether managed private cloud can be cost-competitive with public cloud or on-premises IT
Tags : 
     Gigaom
By: AWS     Published Date: Jul 24, 2019
Modernize your applications quickly. Many organizations are finding their legacy technology to be inadequate due to its rigidity and high costs. In search of more flexible and cost-effective solutions, many are turning to the cloud. Most Independent Software Vendors (ISVs), however, still build their solutions on commercial databases that are expensive and time-consuming for their customers to deploy and maintain. This is a significant undertaking that can hinder adoption. To address these challenges head on, ISVs can modernize their applications on Amazon Aurora, a database offering from AWS that combines the reliability and performance of commercial databases with the cost-effectiveness and flexibility of open source solutions. In this webinar, you will learn how Virtusa, an AWS Partner Network (APN) Premier Consulting Partner, can accelerate your application modernization journey and help you grow your business. View the webinar. (A minimal Aurora connection sketch follows this entry.)
Tags : 
     AWS
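Because Aurora's MySQL-compatible edition speaks the standard MySQL protocol, existing drivers work unchanged. The sketch below uses PyMySQL against a placeholder cluster endpoint; the credentials, database and table are hypothetical.

```python
# Illustrative only: querying an Aurora MySQL-compatible cluster with a
# standard MySQL driver. Endpoint, credentials, and schema are placeholders.
import pymysql

conn = pymysql.connect(
    host="my-cluster.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com",  # placeholder
    user="app_user",
    password="example-password",      # use a secrets manager in practice
    database="inventory",
)
try:
    with conn.cursor() as cur:
        cur.execute("SELECT sku, quantity FROM stock WHERE quantity < %s", (10,))
        for sku, quantity in cur.fetchall():
            print(f"low stock: {sku} ({quantity} left)")
finally:
    conn.close()
```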
By: Hewlett Packard Enterprise     Published Date: Jul 19, 2018
"This research by Nimble Storage, a Hewlett Packard Enterprise Company, outlines the top five causes of application delays. The report analyzes more than 12,000 anonymized cases of downtime and slow performance. Read this report and find out: Top 5 causes of downtime and poor performance across the infrastructure stack How machine learning and predictive analytics can prevent issues Steps you can take to boost performance and availability"
Tags : cloud, nimble storage, infrastructure
     Hewlett Packard Enterprise
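Not the vendor's method, just a bare-bones illustration of the predictive idea: flag latency samples that drift well outside their recent baseline before users notice. The window size, threshold and sample values are arbitrary.

```python
# Illustrative only: flag anomalous latency readings using a rolling mean and
# standard deviation. Real predictive analytics is far more sophisticated.
from statistics import mean, stdev

def find_anomalies(samples, window=20, k=3.0):
    """Return indices whose value exceeds mean + k*stdev of the prior window."""
    flagged = []
    for i in range(window, len(samples)):
        base = samples[i - window:i]
        threshold = mean(base) + k * stdev(base)
        if samples[i] > threshold:
            flagged.append(i)
    return flagged

# Hypothetical millisecond latencies with one spike at the end.
latencies = [2.1, 2.0, 2.3, 2.2, 1.9, 2.1, 2.0, 2.2, 2.1, 2.0,
             2.2, 2.1, 2.0, 2.3, 2.1, 2.2, 2.0, 2.1, 2.2, 2.1, 9.8]
print(find_anomalies(latencies))   # -> [20]
```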
By: Epicor     Published Date: Sep 20, 2017
When determining which investments to make in their technology infrastructure, organizations will often choose to make no changes at all. While this decision avoids short-term costs and business disruption, it often simply delays the inevitable—even making it worse. The cost of doing nothing is expensive in the long term. This report—from the independent researchers at Aberdeen Group—outlines the reasons distributors choose not to upgrade enterprise resource planning (ERP) software and cautions against this approach, supplying detailed research that illustrates the benefits of keeping your systems current. Download this report to learn how a new or improved ERP system can help get you the information you need to make informed decisions and act more efficiently, improving overall company performance.
Tags : erp software, enterprise resource planning software
     Epicor
By: Dell     Published Date: May 13, 2016
No matter your line of business, technology implemented four years ago is likely near its end of life and may be underperforming as more users and more strenuous workloads stretch your resources thin. Adding memory and upgrading processors won't provide the same benefits to your infrastructure as a consolidation and upgrade can. Read this research report to learn how upgrading to Dell's PowerEdge VRTX with Hyper-V virtualization, Microsoft Windows Server 2012 R2, and Microsoft SQL Server 2014 could reduce costs while delivering better performance than trying to maintain aging hardware and software.
Tags : 
     Dell
By: Oracle     Published Date: Mar 05, 2015
Learn the 20 key commerce metrics that you should be tracking to measure and optimize your commerce results. For each metric, you will learn what it means, why you should be tracking it, and the relevant industry benchmarks. Download the eBook now. (A small worked-metrics example follows this entry.)
Tags : oracle, commerce, empowerment, customer, e-commerce, content, marketing, retail ops
     Oracle
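For a concrete sense of what such metrics look like, here is a tiny worked example computing two common ones, conversion rate and average order value. The traffic and revenue figures are invented and are not the eBook's benchmarks.

```python
# Illustrative only: two standard commerce metrics computed from
# hypothetical numbers.
sessions = 48_000          # site visits in the period (hypothetical)
orders = 1_200             # completed purchases (hypothetical)
revenue = 96_000.00        # total order revenue (hypothetical)

conversion_rate = orders / sessions          # share of visits that convert
average_order_value = revenue / orders       # revenue per completed order

print(f"Conversion rate:     {conversion_rate:.2%}")         # 2.50%
print(f"Average order value: ${average_order_value:,.2f}")   # $80.00
```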
By: AWS     Published Date: Dec 15, 2017
Healthcare and Life Sciences organizations are using data to generate knowledge that helps them provide better patient care, enhances biopharma research and development, and streamlines operations across the product innovation and care delivery continuum. Next-Gen business intelligence (BI) solutions can help organizations reduce time-to-insight by aggregating and analyzing structured and unstructured data sets in real or near-real time. AWS and AWS Partner Network (APN) Partners offer technology solutions to help you gain data-driven insights to improve care, fuel innovation, and enhance business performance. In this webinar, you’ll hear from APN Partners Deloitte and hc1.com about their solutions, built on AWS, that enable Next-Gen BI in Healthcare and Life Sciences. Join this webinar to learn:
- How Healthcare and Life Sciences organizations are using cloud-based analytics to fuel innovation in patient care and biopharmaceutical product development
- How AWS supports BI solutions
Tags : 
     AWS
By: Hewlett Packard Enterprise     Published Date: Aug 23, 2018
"This research by Nimble Storage, a Hewlett Packard Enterprise Company, outlines the top five causes of application delays. The report analyzes more than 12,000 anonymized cases of downtime and slow performance. Read this report and find out: -Top 5 causes of downtime and poor performance across the infrastructure stack -How machine learning and predictive analytics can prevent issues -Steps you can take to boost performance and availability
Tags : 
     Hewlett Packard Enterprise
By: Dell and Nutanix     Published Date: Jan 16, 2018
Organizations increasingly require IT infrastructures that support the speed at which their businesses must operate through simplicity, efficiencies, agility, and strong performance. Hyperconverged infrastructure solutions, which enable organizations to minimize or nearly eliminate inefficiencies and complexity associated with maintaining storage and compute silos, have emerged as a strong potential solution for such organizations. IDC’s research demonstrates that organizations running workloads on Nutanix solutions such as Dell XC are benefiting from cost and staff efficiencies, the ability to scale their infrastructure incrementally, very high resiliency, and strong application performance. This is helping interviewed Nutanix solutions customers better meet business challenges and has led many of them to establish plans for expanding their hyperconverged workload environment with Nutanix solutions.
Tags : technologies, infrastructure, data, social, mobile, cloud, nutanix
     Dell and Nutanix
By: Riverbed     Published Date: Jan 25, 2018
"When apps run efficiently, employees are more likely to use them.” The promise of unified communications (UC) is that it is supposed to increase efficiencies and make internal operations more seamless. But that only happens when it’s working properly. According to Robin Gareiss, president and founder of Nemertes Research, “Companies devote 33% more IT staff to managing IP telephony and 31% more to UC when they don’t use monitoring tools. The tools are instrumental to identifying, isolating, and resolving performance issues—and preventing them from happening again. When apps run efficiently, employees are more likely to use them.” That’s one reason 36% more people actually use UC in large companies that use monitoring tools. Join Robin Gareiss, president and founder, Nemertes Research and David Roberts, director of product management, Riverbed, as they explore the different approaches to monitoring UC—network probes vs endpoint telemetry—and why taking a combined approach helps you
Tags : 
     Riverbed
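One basic quantity UC monitoring tools derive from call telemetry is interarrival jitter. The sketch below implements the running estimate defined in RFC 3550 over hypothetical send/receive timestamps; it illustrates the calculation, not either vendor's product.

```python
# Illustrative only: RFC 3550 running interarrival-jitter estimate for an
# RTP stream, computed from (send_time, receive_time) pairs in seconds.
def interarrival_jitter(packets):
    jitter = 0.0
    prev_send, prev_recv = packets[0]
    for send, recv in packets[1:]:
        # D = difference in relative transit time between consecutive packets
        d = (recv - prev_recv) - (send - prev_send)
        jitter += (abs(d) - jitter) / 16.0   # exponential smoothing per RFC 3550
        prev_send, prev_recv = send, recv
    return jitter

# Hypothetical 20 ms voice packets with slightly variable network delay.
stream = [(0.000, 0.050), (0.020, 0.071), (0.040, 0.093), (0.060, 0.112)]
print(f"jitter = {interarrival_jitter(stream) * 1000:.2f} ms")
```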
By: Limelight Networks     Published Date: Mar 02, 2018
Can your business afford to lose $9,000 per minute? According to the Ponemon Institute, $9,000 is the average cost of an unplanned outage. In some cases the costs are much higher. The catalogue of cloud outages over recent years is well publicized and reads like a “who’s who” of the technology industry. It seems no one is immune. But when it comes to delivering digital content, downtime isn’t the only concern. Today a poor user experience can be just as damaging as an outage. According to Limelight research, 78% of people will stop watching an online video after it buffers three times, and the majority of people will not wait more than 5 seconds for a website to load. Organizations looking to deliver great digital experiences for their customers often choose to deliver that content using Content Delivery Networks (CDNs). Using multiple CDNs to deliver these digital content experiences promises even greater levels of availability and performance. But it brings with it a host of questions. (A simplified CDN-selection sketch follows this entry.)
Tags : content delivery network, cdn, multi-cdn, multiple cdns, web performance, web acceleration, digital content delivery, mobile delivery
     Limelight Networks
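A drastically simplified sketch of one multi-CDN strategy: probe each provider's edge and send the next request to whichever currently answers fastest. The hostnames are placeholders, and in production this steering usually happens in DNS or at the edge rather than in application code.

```python
# Illustrative only: pick the currently fastest of several CDN hostnames by
# timing a small HTTPS request to each. Hostnames below are placeholders.
import time
import urllib.request

CDN_HOSTS = [
    "https://cdn-a.example.com/health",
    "https://cdn-b.example.com/health",
    "https://cdn-c.example.com/health",
]

def fastest_cdn(urls, timeout=2.0):
    best_url, best_latency = None, float("inf")
    for url in urls:
        start = time.monotonic()
        try:
            urllib.request.urlopen(url, timeout=timeout).close()
        except OSError:
            continue                      # treat an unreachable CDN as unavailable
        latency = time.monotonic() - start
        if latency < best_latency:
            best_url, best_latency = url, latency
    return best_url                       # None if every probe failed

print("route next request via:", fastest_cdn(CDN_HOSTS))
```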
By: Workday Spain     Published Date: Sep 19, 2018
The annual performance review is in need of a technology-based update. In this global research study, learn how to transform the performance management process from an annual chore to a continuous value driver that better engages and retains employees.
Tags : 
     Workday Spain
By: Dell EMC     Published Date: May 11, 2016
No matter your line of business, technology implemented four years ago is likely near its end of life and may be underperforming as more users and more strenuous workloads stretch your resources thin. Adding memory and upgrading processors won't provide the same benefits to your infrastructure as a consolidation and upgrade can. Read this research report to learn how upgrading to Dell's PowerEdge VRTX with Hyper-V virtualization, Microsoft Windows Server 2012 R2, and Microsoft SQL Server 2014 could reduce costs while delivering better performance than trying to maintain aging hardware and software.
Tags : 
     Dell EMC
By: Dell EMC     Published Date: Nov 04, 2016
According to recent ESG research, 70% of IT respondents indicated they plan to invest in HCI over the next 24 months. IT planners are increasingly turning toward HyperConverged Infrastructure (HCI) solutions to simplify and speed up infrastructure deployments, ease day-to-day operational management, reduce costs, and increase IT speed and agility. HCI consists of a nodal-based architecture whereby all the required virtualized compute, storage, and networking assets are self-contained inside individual nodes. These nodes are, in effect, discrete virtualized computing resources “in a box.” However, they are typically grouped together to provide resiliency, high performance, and flexible resource pooling. And since HCI appliances can scale out to large configurations over time, these systems can provide businesses with investment protection and a simpler, more agile, and cost-effective way to deploy virtualized computing infrastructure. Read this paper to learn more.
Tags : data, software-defined, hyper-convergence, hdd, sddc
     Dell EMC
Get your white papers featured in the insideHPC White Paper Library. Contact: Kevin@insideHPC.com