network

By: Cray     Published Date: Jul 02, 2015
The Cray XC series is a distributed memory system developed as part of Cray’s participation in the Defense Advanced Research Projects Agency’s (DARPA) High Productivity Computing System (HPCS) program. Previously codenamed “Cascade,” the Cray XC system is capable of sustained multi-petaflops performance and features a hybrid architecture combining multiple processor technologies, a high performance network and a high performance operating system and programming environment.
Tags : 
     Cray
By: Intel     Published Date: Aug 06, 2014
Powering Big Data Workloads with Intel® Enterprise Edition for Lustre* software

The Intel® portfolio for high-performance computing provides the following technology solutions:
• Compute - The Intel® Xeon processor E7 family provides a leap forward for every discipline that depends on HPC, with industry-leading performance and improved performance per watt. Add Intel® Xeon Phi coprocessors to your clusters and workstations to increase performance for highly parallel applications and code segments. Each coprocessor can add over a teraflop of performance and is compatible with software written for the Intel® Xeon processor E7 family. You don’t need to rewrite code or master new development tools.
• Storage - High-performance, highly scalable storage solutions with Intel® Enterprise Edition for Lustre* software and Intel® Xeon processor E7 based storage systems for centralized storage, plus reliable and responsive local storage with Intel® Solid State Drives.
• Networking - Intel® True Scale Fabric and Networking technologies, built for HPC to deliver fast message rates and low latency.
• Software and Tools - A broad range of software and tools to optimize and parallelize your software and clusters.
Tags : 
     Intel
By: InsideHPC Special Report     Published Date: Aug 17, 2016
A single issue has always defined the history of HPC systems: performance. While offloading and co-design may seem like new approaches to computing, they actually have been used, to a lesser degree, in the past as a way to enhance performance. Current co-design methods are now going deeper into cluster components than was previously possible. These new capabilities extend from the local cluster nodes into the “computing network.”
Tags : 
     InsideHPC Special Report
By: IBM     Published Date: Sep 02, 2014
This brief webcast covers the new and enhanced capabilities of Elastic Storage 4.1, including native encryption and secure erase, flash-accelerated performance, network performance monitoring, global data sharing, NFS data migration and more. IBM GPFS (Elastic Storage) may be the key to improving your organization's effectiveness and can help define a clear data management strategy for future data growth.
Tags : ibm, elastic storage
     IBM
By: IBM     Published Date: Sep 16, 2015
6 criteria for evaluating a high-performance cloud services provider

Engineering, scientific, analytics, big data and research workloads place extraordinary demands on technical and high-performance computing (HPC) infrastructure. Supporting these workloads can be especially challenging for organizations that have unpredictable spikes in resource demand, or that need access to additional compute or storage resources for a project or to support a growing business. Software Defined Infrastructure (SDI) enables organizations to deliver HPC services in the most efficient way possible, optimizing resource utilization to accelerate time to results and reduce costs. SDI is the foundation for a fully integrated environment, optimizing compute, storage and networking infrastructure to quickly adapt to changing business requirements, and dynamically managing workloads and data.
Tags : 
     IBM
By: IBM     Published Date: Sep 16, 2015
Learn how organizations in cancer research, speech recognition, financial services, automotive design and more are using IBM solutions to improve business results. IBM Software Defined Infrastructure enables organizations to deliver IT services in the most efficient way possible, optimizing resource utilization to accelerate time to results and reduce costs. It is the foundation for a fully integrated software defined environment, optimizing compute, storage and networking infrastructure so organizations can quickly adapt to changing business requirements.
Tags : 
     IBM
By: Seagate     Published Date: Sep 30, 2015
Although high-performance computing (HPC) often stands apart from a typical IT infrastructure—it uses highly specialized scale-out compute, networking and storage resources—it shares with mainstream IT the ability to push data center capacity to the breaking point. Much of this is due to data center inefficiencies caused by HPC storage growth. The Seagate® ClusterStor™ approach to scale-out HPC storage can significantly improve data center efficiency. No other vendor solution offers the same advantages.
Tags : 
     Seagate
By: IBM     Published Date: Feb 13, 2015
Value is migrating throughout the IT industry from hardware to software and services. High Performance Computing (HPC) is no exception. IT solution providers must position themselves to maximize the business value they deliver to their clients – particularly industrial customers, who often use several applications that must be integrated in a business workflow. This requires systems and hardware vendors to invest in making their infrastructure “application ready”. With its Application Ready solutions, IBM is outflanking competitors in Technical Computing and fast-tracking the delivery of client business value by providing an expertly designed, tightly integrated and performance-optimized architecture for several key industrial applications. These Application Ready solutions come with a complete high-performance cluster including servers, network, storage, operating system, management software, parallel file systems and other runtime libraries, all with commercial-level solution support.
Tags : 
     IBM
By: General Atomics     Published Date: Jan 13, 2015
The term “Big Data” has become virtually synonymous with “schema on read” (where data is applied to a plan or schema as it is ingested or pulled out of a stored location) and with unstructured data analysis and handling techniques like Hadoop. These “schema on read” techniques have been most famously exploited on relatively ephemeral human-readable data like retail trends, Twitter sentiment, social network mining, log files, etc. But what if you have unstructured data that, on its own, is hugely valuable, enduring, and created at great expense? Data that may not immediately be human-readable or indexable by search? That is exactly the kind of data most commonly created and analyzed in science and HPC, and research institutions are awash with it from large-scale experiments and extreme-scale computing.
Tags : general atomics, big data, metadata, nirvana
     General Atomics
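The "schema on read" idea described above can be illustrated with a minimal Python sketch (the records, field names, and defaults here are hypothetical, not from any vendor product): raw records are stored exactly as produced, and a schema is applied only when the data is read back out.

```python
import json

# Raw, unstructured records are stored as-is; "schema on write" would
# instead force them into a fixed table shape at ingest time.
raw_records = [
    '{"user": "alice", "sentiment": "positive", "retweets": 12}',
    '{"user": "bob", "retweets": 3}',  # fields may be missing entirely
]

def read_with_schema(lines, schema):
    """Apply a schema only at read time, filling defaults for absent fields."""
    for line in lines:
        record = json.loads(line)
        yield {field: record.get(field, default) for field, default in schema.items()}

# The schema is a choice made by the reader, not the writer.
schema = {"user": None, "sentiment": "unknown", "retweets": 0}
rows = list(read_with_schema(raw_records, schema))
print(rows[1]["sentiment"])  # absent in the raw record, defaulted at read time
```

Different readers can apply different schemas to the same stored bytes, which is what makes the approach attractive for data whose eventual uses are not known at ingest time.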
By: HPE     Published Date: Jul 21, 2016
Deep learning is a method of creating artificial intelligence systems that combine computer-based multi-layer neural networks with intensive training techniques and large data sets to enable analysis and predictive decision making. A fundamental aspect of deep learning environments is that they transcend finite programmable constraints and move into the realm of extensible, trainable systems. Recent developments in technology and algorithms have enabled deep learning systems not only to equal but to exceed human capabilities in processing vast amounts of information.
Tags : 
     HPE
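The multi-layer neural networks at the heart of deep learning can be sketched in a few lines of NumPy (the layer sizes and random weights below are purely illustrative, not HPE's implementation): each layer applies a learned linear map followed by a nonlinearity.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, layers):
    """One forward pass through a stack of (weights, bias) layers."""
    for w, b in layers[:-1]:
        x = np.maximum(0.0, x @ w + b)  # ReLU nonlinearity on hidden layers
    w, b = layers[-1]
    return x @ w + b  # linear output layer

# A tiny 3-layer network: 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs.
sizes = [4, 8, 8, 2]
layers = [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
          for m, n in zip(sizes, sizes[1:])]

out = forward(rng.standard_normal(4), layers)
print(out.shape)  # (2,)
```

Training (adjusting the weights from large data sets) is what turns this fixed computation into the extensible, trainable system the abstract describes.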
By: Data Direct Networks     Published Date: Dec 31, 2015
Using high-performance parallel storage solutions, geologists and researchers can now incorporate larger data sets and execute more seismic and reservoir simulations faster than ever before, enabling higher-fidelity geological analysis and significantly reduced exploration risk. Given the high costs of exploration, oil and gas companies are increasingly turning to high-performance DDN storage solutions to eliminate I/O bottlenecks and minimize risk and costs, while delivering a larger number of higher-fidelity simulations in the same time as traditional storage architectures.
Tags : 
     Data Direct Networks
By: Data Direct Networks     Published Date: Dec 31, 2015
Parallelism and direct memory access enable faster and more accurate SAS analytics. Using Remote Direct Memory Access (RDMA)-based analytics and fast, scalable external disk systems with massively parallel access to data, SAS analytics-driven organizations can deliver timely and accurate execution for data-intensive workflows such as risk management, while incorporating larger datasets than with traditional NAS.
Tags : 
     Data Direct Networks
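The core idea above, many workers operating on disjoint slices of a dataset at once, can be sketched with Python's multiprocessing module (the chunking scheme is illustrative only, not DDN's or SAS's actual RDMA I/O path):

```python
from multiprocessing import Pool

def partial_sum(chunk):
    """Each worker processes its own slice of the data independently."""
    return sum(chunk)

def parallel_total(data, workers=4):
    # Split the dataset into disjoint, roughly equal chunks.
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    data = list(range(1_000_000))
    print(parallel_total(data))  # matches the serial sum of the data
```

Real parallel storage extends the same decomposition down to the disk level, so each worker's slice arrives without contending for a single I/O channel.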
By: Data Direct Networks     Published Date: Dec 31, 2015
When it comes to generating increasingly larger data sets and stretching the limits of high performance computing (HPC), the field of genomics and next generation sequencing (NGS) is at the forefront. The major impetus for this data explosion began in 1990 when the U.S. kicked off the Human Genome Project, an ambitious project designed to sequence the three billion base pairs that constitute the complete set of DNA in the human body. Eleven years and $3 billion later the deed was done. This breakthrough was followed by a massive upsurge in genomics research and development that included rapid advances in sequencing using the power of HPC. Today an individual’s genome can be sequenced overnight for less than $1,000.
Tags : 
     Data Direct Networks
By: Cisco EMEA Tier 3 ABM     Published Date: May 15, 2018
In today's global economy, small business networks must help reduce operating expenses. They must allow the business to react quickly to changes in the market and in customer needs. And they must be ready for the future. Routing and switching technologies can have a positive impact on expenses: sharing resources such as printers, Internet access, servers and services saves money. Moreover, a reliable network can evolve with the company, so it does not need to be replaced as needs grow.
Tags : 
     Cisco EMEA Tier 3 ABM
By: Cisco EMEA Tier 3 ABM     Published Date: May 15, 2018
Miercom was commissioned by Cisco Systems to independently configure, operate and then evaluate the wireless and wired campus network infrastructures of Cisco Systems and Huawei Technologies. Each vendor's products were configured and deployed strictly according to the vendors' recommended designs, using their respective software for managing, controlling, configuring and monitoring the network across the campus.
Tags : 
     Cisco EMEA Tier 3 ABM
By: Hewlett Packard Enterprise     Published Date: Apr 20, 2018
In an innovation-powered economy, ideas need to travel at the speed of thought. Yet even as our ability to communicate across companies and time zones grows rapidly, people remain frustrated by downtime and unanticipated delays across the increasingly complex grid of cloud-based infrastructure, data networks, storage systems, and servers that power our work.
Tags : 
     Hewlett Packard Enterprise
By: Juniper Networks     Published Date: May 04, 2018
Business leaders are eager to leverage new technologies, and IT leaders can't afford to fall behind. Hybrid IT environments take advantage of private and public clouds but need enhanced security, automation, orchestration, and agility.
Tags : 
     Juniper Networks
By: Juniper Networks     Published Date: May 04, 2018
The role of IT is changing from a traditional focus on cost-efficient enablement to a more strategic contribution. Total Cost of Ownership (TCO), while important, is being surpassed by a growing focus on automation and orchestration that is needed to fulfill enterprise demands for security, agility, and innovation. The cloud-grade enterprise network – spanning the campus, data center, and branch – must be able to respond to rapid changes in business, growing reliance on hybrid cloud architectures, and the needs of users and customers.
Tags : 
     Juniper Networks
By: Juniper Networks     Published Date: May 04, 2018
PwC surveyed 235 IT leaders and interviewed another 35 from large, medium, and small enterprises to understand the buying decisions of IT leaders across a wide variety of networking components (i.e., switches, SDN, and infrastructure monitoring solutions) within the data center. This report highlights the survey and interview insights to help enterprise IT leaders understand the trends and implications of multicloud environments.
Tags : 
     Juniper Networks
By: Dell PC Lifecycle     Published Date: May 15, 2018
Windows Server 2016 is an important release in enabling IT to deliver on the promise of the third platform. It provides a path to a seamless, integrated cloud environment—incorporating public, private and hybrid models—with the software-defined data center as the hub. In migrating to this next-generation data center model, it is essential that IT leaders choose the right partner for the compute platform, as well as storage, networking and systems management.
Tags : 
     Dell PC Lifecycle
By: Google     Published Date: Apr 26, 2018
No one in today’s highly connected world is exempt from security threats like phishing, ransomware, or denial-of-service (DoS) attacks. Certainly not Google. Google operates seven services with more than one billion active users each (including Google Search, YouTube, Maps, and Gmail). We see every type of attack, bad software, and bad actors—multiple times a day—and we’re proud of what our people, processes, and technology do to stop them. Google has published more than 160 academic research papers on computer security, privacy, and abuse prevention and has privately warned other software companies of weaknesses discovered in their systems. Within Google, we enforce a zero-trust security model, which monitors every device on the internal network.
Tags : 
     Google
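The zero-trust model described above, in which no request is trusted merely because it arrived on the internal network, can be sketched as a per-request policy check. The device inventory and policy fields below are hypothetical, not Google's actual implementation:

```python
# Zero trust: every request is evaluated on user and device identity,
# never on whether it came from the "internal" network.
KNOWN_DEVICES = {"laptop-42": {"patched": True}}

def authorize(user_authenticated, device_id, resource_sensitivity):
    device = KNOWN_DEVICES.get(device_id)
    if not user_authenticated or device is None:
        return False  # unknown device or unauthenticated user fails closed
    if resource_sensitivity == "high" and not device["patched"]:
        return False  # sensitive resources require a healthy device
    return True

print(authorize(True, "laptop-42", "high"))  # True: known, patched device
print(authorize(True, "laptop-99", "low"))   # False: device not in inventory
```

The key design choice is that the check runs on every request, so compromising the network perimeter alone grants an attacker nothing.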
By: Centrify Corporation     Published Date: May 22, 2018
A significant paradigm shift occurred in the last few years. Much like other technological shifts of the last decade — when cloud computing changed the way we do business, agile changed the way we develop software and Amazon changed the way we shop — Zero Trust presents us with a new paradigm in how we secure our organizations, our data and our employees. While it is difficult to identify the precise tipping point, one thing is certain: what were once extraordinarily high-profile, damaging breaches are no longer extraordinary. In just the last 18 months, Yahoo, Accenture, HBO, Verizon, Uber, Equifax, Deloitte, the U.S. SEC, the RNC, the DNC, the OPM, HP, Oracle and a profusion of attacks aimed at the SMB market have all proven that every organization — public or private — is susceptible. The epiphany behind the paradigm shift is clear: widely accepted security approaches based on bolstering a trusted network do not work. And they never will.
Tags : 
     Centrify Corporation
By: MobileIron     Published Date: May 07, 2018
The types of threats targeting enterprises are vastly different than they were just a couple of decades ago. Today, successful enterprise attacks are rarely executed by the “lone wolf” hacker and instead come from highly sophisticated and professional cybercriminal networks. These networks are driven by the profitability of ransomware and the sale of confidential consumer data, intellectual property, government intelligence, and other valuable data. While traditional PC-based antivirus solutions can offer some protection against these attacks, organizations need highly adaptive and much faster mobile threat defense (MTD) for enterprise devices.
Tags : mobile, threat, detection, machine, ransomware, confidential, networks
     MobileIron
By: Datastax     Published Date: May 14, 2018
"What’s In The Report? The Forrester Graph Database Vendor Landscape discusses the expanding graph uses cases, new and emerging graph solutions, the two approaches to graph, how graph databases are able to provide penetrating insights using deep data relationships, and the top 10 graph vendors in the market today Download The Report If You: -Want to know how graph databases work to provide quick, deep, actionable insights that help with everything from fraud to personalization to go-to-market acceleration, without having to write code or spend operating budget on data scientists. -Learn the new graph uses cases, including 360-degree views, fraud detection, recommendation engines, and social networking. -Learn about the top 10 graph databases and why DSE Graph continues to gain momentum with customers who like its ability to scale out in multi-data-center, multi-cloud, and hybrid environments, as well as visual operations, search, and advanced security."
Tags : 
     Datastax
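The deep-relationship queries mentioned above, such as "friends of friends" for recommendations or fraud-ring detection, are what graph databases optimize. A toy breadth-first traversal over an adjacency list (the data and function names are illustrative, not DSE Graph's API) shows the shape of such a query:

```python
from collections import deque

# A tiny social graph as an adjacency list.
FOLLOWS = {
    "alice": ["bob", "carol"],
    "bob": ["dave"],
    "carol": ["dave", "erin"],
    "dave": [],
    "erin": [],
}

def within_hops(graph, start, max_hops):
    """Breadth-first traversal: everyone reachable within max_hops edges."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    seen.discard(start)
    return seen

print(sorted(within_hops(FOLLOWS, "alice", 2)))  # ['bob', 'carol', 'dave', 'erin']
```

A graph database executes this kind of multi-hop traversal natively, whereas a relational store would need one self-join per hop.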
By: Sangoma     Published Date: Jan 30, 2013
IP communications across multiple, sometimes untrusted, networks need to be normalized, managed and secured. Sangoma's Session Border Controllers, part of the most cost-effective, easiest-to-manage line of Session Border Controllers on the market, address exactly this. Read on to learn how they can help you.
Tags : ip communications, sangoma, ip firewall, datasheet, sangoma, session border control, carrier appliance, networking, enterprise applications
     Sangoma
Get your white papers featured in the insideHPC White Paper Library contact: Kevin@insideHPC.com