architect

By: Cray     Published Date: Jul 02, 2015
The Cray XC series is a distributed memory system developed as part of Cray’s participation in the Defense Advanced Research Projects Agency’s (DARPA) High Productivity Computing System (HPCS) program. Previously codenamed “Cascade,” the Cray XC system is capable of sustained multi-petaflops performance and features a hybrid architecture combining multiple processor technologies, a high performance network and a high performance operating system and programming environment.
Tags : 
     Cray
By: Intel     Published Date: Aug 06, 2014
Designing a large-scale, high-performance data storage system presents significant challenges. This paper describes a step-by-step approach to designing such a system and presents an iterative methodology that applies at both the component level and the system level. A detailed case study then applies the methodology to the design of a Lustre storage system.
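The component-to-system iteration the paper describes can be sketched roughly as follows. This is a hypothetical illustration only, not the paper's actual method or figures: the per-server bandwidth, OST counts and capacities below are assumed values.

```python
import math

# Hypothetical Lustre sizing step (illustrative assumptions, not the paper's numbers):
# start from a target aggregate bandwidth at the system level, derive the number of
# object storage servers (OSS) at the component level, then check resulting capacity.

def oss_needed(target_gbps: float, per_oss_gbps: float) -> int:
    """Minimum OSS count to sustain the target aggregate bandwidth."""
    return math.ceil(target_gbps / per_oss_gbps)

def capacity_pb(num_oss: int, osts_per_oss: int, ost_tb: float) -> float:
    """Usable capacity in PB implied by that OSS count."""
    return num_oss * osts_per_oss * ost_tb / 1000.0

n = oss_needed(100.0, 6.0)       # 100 GB/s target, assumed 6 GB/s per OSS -> 17 OSS
print(n, capacity_pb(n, 4, 96.0))  # 17 OSS x 4 OSTs x 96 TB -> 6.528 PB
```

If the capacity implied by the bandwidth-driven count misses the capacity target (or vice versa), the larger of the two counts wins and the iteration repeats at the next component level down.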
Tags : intel, high performance storage
     Intel
By: InsideHPC Special Report     Published Date: Aug 17, 2016
A single issue has always defined the history of HPC systems: performance. While offloading and co-design may seem like new approaches to computing, they actually have been used, to a lesser degree, in the past as a way to enhance performance. Current co-design methods are now going deeper into cluster components than was previously possible. These new capabilities extend from the local cluster nodes into the “computing network.”
Tags : 
     InsideHPC Special Report
By: IBM     Published Date: Jun 05, 2014
As new research and engineering environments expand to include more powerful computers running increasingly complex simulations, the management of these heterogeneous computing environments grows in complexity as well. Integrated solutions that include the Intel® Many Integrated Core (MIC) architecture can dramatically boost aggregate performance for highly parallel applications.
Tags : 
     IBM
By: SGI     Published Date: Mar 03, 2015
The SGI UV system is uniquely suited to bioinformatics and genomics, providing the computational capabilities and global shared-memory architecture needed for even the most demanding sequencing and analytic tasks, including post-sequencing and other data-intensive workflows. Because of the system's outstanding speed and throughput, genomics researchers can run very large jobs in less time, realizing a dramatically accelerated time-to-solution. Best of all, they can explore avenues of research that were computationally beyond the reach of HPC systems lacking the power and in-memory capabilities of the SGI UV.
Tags : 
     SGI
By: Penguin Computing     Published Date: Mar 23, 2015
The Open Compute Project, initiated by Facebook as a way to increase computing power while lowering associated costs with hyper-scale computing, has gained a significant industry following. While the initial specifications were created for a Web 2.0 environment, Penguin Computing has adapted these concepts to create a complete hardware ecosystem solution that addresses these needs and more. The Tundra OpenHPC system is applicable to a wide range of HPC challenges and delivers the most requested features for data center architects.
Tags : penguin computing, open computing, computing power, hyper-scale computing, tundra openhpc
     Penguin Computing
By: IBM     Published Date: May 20, 2015
The latest generation of highly scalable HPC clusters is a game changer for design optimization challenges. HPC clusters, built on a modular, multi-core x86 architecture, provide a cost-effective and accessible platform for realistic simulation compared with the “big iron” HPC systems of the past or the latest workstation models. This paper provides six steps to making clusters a reality for any business.
Tags : 
     IBM
By: Seagate     Published Date: Sep 30, 2015
In all things, but especially in high-performance computing (HPC), the details matter. In HPC, the most important detail is drive efficiency, especially when analyzing storage performance. This fundamental metric is easy to calculate and can have a huge impact on HPC storage performance. In this guide, we’ll look at why conventional HPC storage architectures do so poorly at drive-level throughput and what that means to an organization’s ability to realize the full potential of its HPC investments. We’ll also explain why ClusterStor™ from Seagate® excels in this arena.
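As a rough illustration of how such a drive-efficiency metric can be calculated (the drive count and throughput figures below are assumed for illustration, not Seagate's or ClusterStor's numbers):

```python
# Hypothetical drive-efficiency calculation (assumed figures, not from the paper):
# efficiency = delivered file-system throughput / aggregate raw drive throughput

def drive_efficiency(delivered_gbps: float, num_drives: int, per_drive_gbps: float) -> float:
    """Fraction of the drives' raw bandwidth the storage system actually delivers."""
    raw_gbps = num_drives * per_drive_gbps
    return delivered_gbps / raw_gbps

# Example: 480 drives at an assumed 0.15 GB/s raw each (72 GB/s aggregate),
# while the file system delivers 36 GB/s to applications.
print(drive_efficiency(36.0, 480, 0.15))  # -> 0.5, i.e. 50% drive efficiency
```

The gap between the measured ratio and 1.0 is exactly the architectural overhead the guide attributes to conventional HPC storage designs.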
Tags : 
     Seagate
By: InsideHPC Special Report     Published Date: Sep 04, 2013
The recently launched Mellanox Connect-IB™ InfiniBand adapter introduced a novel high-performance and scalable architecture for high-performance clusters. The architecture was designed from the ground up to provide high performance and scalability for the largest supercomputers in the world, today and in the future.
Tags : 
     InsideHPC Special Report
By: IBM     Published Date: Feb 13, 2015
Value is migrating throughout the IT industry from hardware to software and services, and High Performance Computing (HPC) is no exception. IT solution providers must position themselves to maximize the business value they deliver to their clients, particularly industrial customers who often use several applications that must be integrated in a business workflow. This requires systems and hardware vendors to invest in making their infrastructure “application ready”. With its Application Ready solutions, IBM is outflanking competitors in Technical Computing and fast-tracking the delivery of client business value by providing an expertly designed, tightly integrated and performance-optimized architecture for several key industrial applications. These Application Ready solutions come with a complete high-performance cluster including servers, network, storage, operating system, management software, parallel file systems and other runtime libraries, all with commercial-level solution support.
Tags : 
     IBM
By: Panasas     Published Date: Oct 02, 2014
HPC and technical computing environments require the collection, storage, and transmission of large-scale datasets. To meet these demands, data center architects must consider how increasing storage capacity over time will affect HPC workloads, performance, and system availability. While many enterprises have looked to scale-up NAS to meet their storage needs, this approach can lead to data islands that make it difficult to share data. Distributed, scale-out storage was developed to get around the technology limitations of scale-up NAS architectures.
Tags : 
     Panasas
By: Dell and Intel®     Published Date: Mar 30, 2015
Dell has teamed with Intel® to create innovative solutions that can accelerate the research, diagnosis and treatment of diseases through personalized medicine. The combination of leading-edge Intel® Xeon® processors and the systems and storage expertise from Dell creates a state-of-the-art data center solution that is easy to install, manage and expand as required. Labeled the Dell Genomic Data Analysis Platform (GDAP), this solution is designed to achieve fast results with maximum efficiency. The solution is architected to address a number of customer challenges, including the perception that implementations must be large-scale in nature, as well as compliance, security, and clinician-use concerns.
Tags : 
     Dell and Intel®
By: Data Direct Networks     Published Date: Dec 31, 2015
Using high-performance parallel storage solutions, geologists and researchers can now incorporate larger data sets and execute more seismic and reservoir simulations faster than ever before, enabling higher-fidelity geological analysis and significantly reduced exploration risk. With the high costs of exploration, oil and gas companies are increasingly turning to high-performance DDN storage solutions to eliminate I/O bottlenecks and minimize risk and costs, while delivering a larger number of higher-fidelity simulations in the same time as traditional storage architectures.
Tags : 
     Data Direct Networks
By: IBM APAC     Published Date: May 14, 2019
Digital transformation (DX) continues to headline business initiatives as companies look to remain competitive in the rapidly changing IT landscape. Organizations are increasingly investing in and implementing next-generation applications and architectures, such as software-defined IT and hybrid IT, to drive higher levels of efficiency and agility. These modern technologies and architectures, however, also require organizations to evolve their underlying infrastructure to support new capabilities and demands. Recent strength in the server market, which continues to operate in a marketwide refresh cycle, illustrates the growing demand for new infrastructure. 1Q18 marked the server market's fifth consecutive quarter of both shipment and revenue growth and a fourth consecutive quarter of year-on-year ASP growth, as customers continue to replace aging server infrastructure with more powerful and efficient systems that leverage the latest platform developments from processor suppliers.
Tags : 
     IBM APAC
By: Black Duck Software     Published Date: Jul 16, 2010
This paper is for IT development executives looking to gain control of open source software as part of a multi-source development process. You can gain significant management control over open source software use in your development organization. Today, many IT executives, enterprise architects, and development managers in leading companies have gained management control over the externally-sourced software used by their application development groups. Download this free paper to discover how.
Tags : open source, development, architects, application development, software development
     Black Duck Software
By: Group M_IBM Q2'19     Published Date: Apr 01, 2019
IBM Cloud Private for Data is an integrated data science, data engineering and app building platform built on top of IBM Cloud Private (ICP). The latter is intended to a) provide all the benefits of cloud computing, but inside your firewall, and b) provide a stepping-stone, should you want one, to broader (public) cloud deployments. Further, ICP has a micro-services architecture, which has additional benefits, which we will discuss. Going beyond this, ICP for Data itself is intended to provide an environment that makes it easier to implement data-driven processes and operations and, more particularly, to support both the development of AI and machine learning capabilities and their deployment. This last point is important because there can easily be a disconnect between data scientists (who often work for business departments) and the people (usually IT) who need to operationalise the work of those data scientists.
Tags : 
     Group M_IBM Q2'19
By: Group M_IBM Q2'19     Published Date: Apr 02, 2019
There can be no doubt that the architecture for analytics has evolved over its 25- to 30-year history. Many recent innovations have had significant impacts on this architecture since the simple concept of a single repository of data called a data warehouse. First, the data warehouse appliance (DWA), along with the advent of the NoSQL revolution, self-service analytics, and other trends, has had a dramatic impact on the traditional architecture. Second, the emergence of data science, real-time operational analytics, and self-service demands has certainly had a substantial effect on the analytical architecture.
Tags : 
     Group M_IBM Q2'19
By: Genesys     Published Date: Feb 12, 2019
Everyone says they’re “in the cloud,” but most technology leaders would agree that not all clouds are created equal. When evaluating a cloud contact centre solution for your business, it’s important to understand the difference between a true Cloud 2.0 application and traditional software, including which features to look for and why those features are important. Download this eBook and learn:
- How a true Cloud 2.0 model is built to provide levels of reliability, scalability, flexibility and security that far exceed those of previous generations
- The benefits of utilising a platform built on microservices architecture
- How to take your business to the next level with a built-to-scale cloud contact centre platform
Tags : 
     Genesys
By: Bell Micro     Published Date: Jun 14, 2010
This paper discusses how the IBM XIV Storage System's revolutionary built-in virtualization architecture provides a way to drastically reduce the costs of managing storage systems.
Tags : bell micro, ibm xiv, storage system management, virtualization architecture
     Bell Micro
By: Bell Micro     Published Date: Jun 14, 2010
In this white paper, we describe the XIV snapshot architecture and explain its underlying advantages in terms of performance, ease of use, flexibility and reliability.
Tags : bell micro, hardware, storage management, vmware, ibm xiv
     Bell Micro
By: Forcepoint     Published Date: May 14, 2019
Keep Your Organization Up to Speed
As your organization expands, you need a resilient, fast, and secure network, and balancing all three is no easy task. A software-defined wide area network (SD-WAN) provides a competitive advantage over traditional service provider options and “hub and spoke” architectures, and deploying it today can provide a quantifiable competitive edge. Gartner estimates SD-WAN has less than 5% market share today, but predicts 25% of users will manage their WAN through software within two years. “Secure Enterprise SD-WAN for Dummies” guides you step by step through managing and securing digital networks with SD-WAN, with instructions even the most novice networking professional can understand. Read “Secure Enterprise SD-WAN for Dummies” and gain a competitive edge today!
Tags : 
     Forcepoint
By: Forcepoint     Published Date: May 14, 2019
IDC Infobrief: How Security Solutions Enable Cloud Success
Cloud applications have transformed how organizations conduct business, increasing productivity and reducing costs. However, moving to the cloud means critical data now flows outside the organization’s traditional security boundaries. Unsurprisingly, security concerns still rank as the number one reason for not using cloud services*. This IDC InfoBrief, sponsored by Forcepoint, outlines five key security considerations and best practices for safe cloud adoption, including:
- Multi-cloud adoption: exposing challenges for security architectures
- Cloud-first and GDPR implications
- Cloud application visibility and control
- Data management and DLP in the cloud
- Leveraging User and Entity Behavior Analytics (UEBA) for protection
Read the eBook to find out the five information security lessons learned from transitioning to the cloud.
*IDC’s 2017 CloudView Survey; IDC’s 2017 CloudImpact Survey
Tags : 
     Forcepoint
Get your white papers featured in the insideHPC White Paper Library contact: Kevin@insideHPC.com