network

Results 1 - 25 of 4395
By: Cray     Published Date: Jul 02, 2015
The Cray XC series is a distributed memory system developed as part of Cray’s participation in the Defense Advanced Research Projects Agency’s (DARPA) High Productivity Computing System (HPCS) program. Previously codenamed “Cascade,” the Cray XC system is capable of sustained multi-petaflops performance and features a hybrid architecture combining multiple processor technologies, a high performance network and a high performance operating system and programming environment.
Tags : 
     Cray
By: Intel     Published Date: Aug 06, 2014
Powering Big Data Workloads with Intel® Enterprise Edition for Lustre* software. The Intel® portfolio for high-performance computing provides the following technology solutions:
• Compute – The Intel® Xeon processor E7 family provides a leap forward for every discipline that depends on HPC, with industry-leading performance and improved performance per watt. Add Intel® Xeon Phi coprocessors to your clusters and workstations to increase performance for highly parallel applications and code segments. Each coprocessor can add over a teraflop of performance and is compatible with software written for the Intel® Xeon processor E7 family. You don’t need to rewrite code or master new development tools.
• Storage – High-performance, highly scalable storage solutions with Intel® Enterprise Edition for Lustre* software and Intel® Xeon processor E7 based storage systems for centralized storage. Reliable and responsive local storage with Intel® Solid State Drives.
• Networking – Intel® True Scale Fabric and Networking technologies, built for HPC to deliver fast message rates and low latency.
• Software and Tools – A broad range of software and tools to optimize and parallelize your software and clusters.
Tags : 
     Intel
By: InsideHPC Special Report     Published Date: Aug 17, 2016
A single issue has always defined the history of HPC systems: performance. While offloading and co-design may seem like new approaches to computing, they actually have been used, to a lesser degree, in the past as a way to enhance performance. Current co-design methods are now going deeper into cluster components than was previously possible. These new capabilities extend from the local cluster nodes into the “computing network.”
Tags : 
     InsideHPC Special Report
By: IBM     Published Date: Sep 02, 2014
This brief webcast will cover the new and enhanced capabilities of Elastic Storage 4.1, including native encryption and secure erase, flash-accelerated performance, network performance monitoring, global data sharing, NFS data migration and more. IBM GPFS (Elastic storage) may be the key to improving your organization's effectiveness and can help define a clear data management strategy for future data growth and support.
Tags : ibm, elastic storage
     IBM
By: IBM     Published Date: Sep 16, 2015
6 criteria for evaluating a high-performance cloud services provider. Engineering, scientific, analytics, big data and research workloads place extraordinary demands on technical and high-performance computing (HPC) infrastructure. Supporting these workloads can be especially challenging for organizations that have unpredictable spikes in resource demand, or that need access to additional compute or storage resources for a project or to support a growing business. Software Defined Infrastructure (SDI) enables organizations to deliver HPC services in the most efficient way possible, optimizing resource utilization to accelerate time to results and reduce costs. SDI is the foundation for a fully integrated environment, optimizing compute, storage and networking infrastructure to quickly adapt to changing business requirements, and dynamically managing workloads and data.
Tags : 
     IBM
By: IBM     Published Date: Sep 16, 2015
Learn how organizations in cancer research, speech recognition, financial services, automotive design and more are using IBM solutions to improve business results. IBM Software Defined Infrastructure enables organizations to deliver IT services in the most efficient way possible, optimizing resource utilization to accelerate time to results and reduce costs. It is the foundation for a fully integrated software-defined environment, optimizing compute, storage and networking infrastructure so organizations can quickly adapt to changing business requirements.
Tags : 
     IBM
By: Seagate     Published Date: Sep 30, 2015
Although high-performance computing (HPC) often stands apart from a typical IT infrastructure—it uses highly specialized scale-out compute, networking and storage resources—it shares with mainstream IT the ability to push data center capacity to the breaking point. Much of this is due to data center inefficiencies caused by HPC storage growth. The Seagate® ClusterStor™ approach to scale-out HPC storage can significantly improve data center efficiency. No other vendor solution offers the same advantages.
Tags : 
     Seagate
By: IBM     Published Date: Feb 13, 2015
Value is migrating throughout the IT industry from hardware to software and services. High Performance Computing (HPC) is no exception. IT solution providers must position themselves to maximize the business value they deliver to their clients – particularly industrial customers, who often use several applications that must be integrated into a business workflow. This requires systems and hardware vendors to invest in making their infrastructure “application ready”. With its Application Ready solutions, IBM is outflanking competitors in Technical Computing and fast-tracking the delivery of client business value by providing an expertly designed, tightly integrated and performance-optimized architecture for several key industrial applications. These Application Ready solutions come with a complete high-performance cluster including servers, network, storage, operating system, management software, parallel file systems and other runtime libraries, all with commercial-level solution support.
Tags : 
     IBM
By: General Atomics     Published Date: Jan 13, 2015
The term “Big Data” has become virtually synonymous with “schema on read” (where a schema is applied to the data as it is ingested or pulled out of a stored location) and with unstructured data analysis and handling techniques like Hadoop. These “schema on read” techniques have been most famously exploited on relatively ephemeral, human-readable data like retail trends, Twitter sentiment, social network mining, log files, etc. But what if you have unstructured data that, on its own, is hugely valuable, enduring, and created at great expense? Data that may not immediately be human-readable or indexable for search? This is exactly the kind of data most commonly created and analyzed in science and HPC. Research institutions are awash with such data from large-scale experiments and extreme-scale computing.
Tags : general atomics, big data, metadata, nirvana
     General Atomics
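To make the “schema on read” idea in the abstract above concrete, here is a minimal Python sketch. The file name and field names are hypothetical; the point is only that the schema travels with the query and is applied when records are pulled out of storage, not when they are written.

```python
import json

# Hypothetical file and field names, for illustration only.
RAW_FILE = "events.jsonl"  # semi-structured records written with no fixed schema


def read_with_schema(path, schema):
    """Apply a schema at read time ("schema on read"): the stored records stay
    unstructured; fields are selected and coerced only when data is pulled out."""
    with open(path) as fh:
        for line in fh:
            record = json.loads(line)
            row = {}
            for field, cast in schema.items():
                value = record.get(field)
                row[field] = cast(value) if value is not None else None
            yield row


# The schema lives with the query, not with the stored data.
schema = {"user": str, "timestamp": str, "score": float}

if __name__ == "__main__":
    for row in read_with_schema(RAW_FILE, schema):
        print(row)
```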
By: HPE     Published Date: Jul 21, 2016
Deep learning is a method of creating artificial intelligence systems that combine computer-based multi-layer neural networks with intensive training techniques and large data sets to enable analysis and predictive decision making. A fundamental aspect of deep learning environments is that they transcend finite programmable constraints and move into the realm of extensible and trainable systems. Recent developments in technology and algorithms have enabled deep learning systems not only to equal but to exceed human capabilities in the pace of processing vast amounts of information.
Tags : 
     HPE
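As a small illustration of the multi-layer, trainable systems the abstract above describes, here is a minimal sketch (not HPE’s implementation) of a two-layer neural network trained by gradient descent on a toy problem; the architecture and hyperparameters are assumptions chosen only for readability.

```python
import numpy as np

# Toy two-layer network trained on XOR; illustrative only.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass through the two layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared-error loss.
    grad_out = (out - y) * out * (1 - out)
    grad_W2 = h.T @ grad_out
    grad_b2 = grad_out.sum(axis=0)
    grad_h = grad_out @ W2.T * h * (1 - h)
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)
    # Gradient-descent update ("training" in miniature).
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

print(np.round(out, 2))  # approaches [[0], [1], [1], [0]]
```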
By: Data Direct Networks     Published Date: Dec 31, 2015
Using high-performance parallel storage solutions, geologists and researchers can now incorporate larger data sets and execute more seismic and reservoir simulations faster than ever before, enabling higher-fidelity geological analysis and significantly reduced exploration risk. Given the high costs of exploration, oil and gas companies are increasingly turning to high-performance DDN storage solutions to eliminate I/O bottlenecks and minimize risk and costs, while delivering a larger number of higher-fidelity simulations in the same time as traditional storage architectures.
Tags : 
     Data Direct Networks
By: Data Direct Networks     Published Date: Dec 31, 2015
Parallelism and direct memory access enable faster and more accurate SAS analytics. Using Remote Direct Memory Access (RDMA)-based analytics and fast, scalable, external disk systems with massively parallel access to data, SAS analytics-driven organizations can deliver timely and accurate execution of data-intensive workflows such as risk management, while incorporating larger datasets than is possible with traditional NAS.
Tags : 
     Data Direct Networks
By: Data Direct Networks     Published Date: Dec 31, 2015
When it comes to generating increasingly larger data sets and stretching the limits of high-performance computing (HPC), the field of genomics and next-generation sequencing (NGS) is at the forefront. The major impetus for this data explosion began in 1990 when the U.S. kicked off the Human Genome Project, an ambitious effort to sequence the three billion base pairs that constitute the complete set of DNA in the human body. Eleven years and $3 billion later, the deed was done. This breakthrough was followed by a massive upsurge in genomics research and development that included rapid advances in sequencing using the power of HPC. Today an individual’s genome can be sequenced overnight for less than $1,000.
Tags : 
     Data Direct Networks
By: Cisco EMEA Tier 3 ABM     Published Date: Jun 05, 2018
Miercom was commissioned by Cisco Systems to configure, operate, and then independently evaluate the wireless and wired campus network infrastructures of Cisco Systems and Huawei Technologies. Each vendor’s products were configured and deployed strictly according to the vendors’ recommended designs, using their respective software for managing, controlling, configuring, and monitoring the network across the entire campus.
Tags : 
     Cisco EMEA Tier 3 ABM
By: Hewlett Packard Enterprise     Published Date: Apr 20, 2018
For midsize firms around the world with 100 to 999 employees, advanced technology plays an increasingly important role in business success. Companies have been adding cloud resources to supplement on-premise server, storage, and networking capabilities. At the same time, growth of mobile and remote workers is also changing how companies need to support workers to allow them to be as productive as possible.
Tags : 
     Hewlett Packard Enterprise
By: Hewlett Packard Enterprise     Published Date: Apr 20, 2018
In an innovation-powered economy, ideas need to travel at the speed of thought. Yet even as our ability to communicate across companies and time zones grows rapidly, people remain frustrated by downtime and unanticipated delays across the increasingly complex grid of cloud-based infrastructure, data networks, storage systems, and servers that power our work.
Tags : 
     Hewlett Packard Enterprise
By: Akamai Technologies     Published Date: Jun 14, 2018
The moat-and-castle approach is an antiquated yet common way of protecting the enterprise network. This paper describes a new way to protect the enterprise, one that hides applications from the Internet, outside of the firewall, with zero open ports and a minimal attack surface.
Tags : security, enterprise security, eaa
     Akamai Technologies
By: Akamai Technologies     Published Date: Jun 14, 2018
IT complexity has rapidly grown with more applications, users, and infrastructure needed to service the enterprise. Traditional remote access models weren’t built for the business models of today and are unable to keep up with the pace of change. Read this paper to understand how remote access can be redefined to remove complexity, meet end-user expectations and mitigate network security risks.
Tags : security, multi-factor authentication, network security risks
     Akamai Technologies
By: Akamai Technologies     Published Date: Jun 14, 2018
"High-profile cyber attacks seem to occur almost daily in recent years. Clearly security threats are persistent and growing. While many organizations have adopted a defense-in-depth strategy — utilizing anti-virus protection, firewalls, intruder prevention systems, sandboxing, and secure web gateways — most IT departments still fail to explicitly protect the Domain Name System (DNS). This oversight leaves a massive gap in network defenses. But this infrastructure doesn’t have to be a vulnerability. Solutions that protect recursive DNS (rDNS) can serve as a simple and effective security control point for end users and devices on your network. Read this white paper to learn more about how rDNS is putting your enterprise at risk, why you need a security checkpoint at this infrastructural layer, how rDNS security solutio Read 5 Reasons Enterprises Need a New Access Model to learn about the fundamental changes enterprises need to make when providing access to their private applications.
Tags : rdns, dns, anti-virus, security, network defense
     Akamai Technologies
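To illustrate the idea above of recursive DNS as a security control point, here is a simplified, hypothetical Python sketch: a policy check in front of name resolution that refuses to resolve domains on a blocklist. A real rDNS security service would draw on live threat intelligence rather than a hard-coded set.

```python
import socket

# Hypothetical blocklist; a real deployment would consume a live threat feed.
BLOCKED_DOMAINS = {"malware.example", "phishing.example"}


def resolve_if_allowed(hostname):
    """Tiny policy check in front of recursive DNS: refuse to resolve names that
    match the blocklist, otherwise resolve normally via the system resolver."""
    name = hostname.lower().rstrip(".")
    if name in BLOCKED_DOMAINS or any(name.endswith("." + d) for d in BLOCKED_DOMAINS):
        raise PermissionError(f"DNS resolution blocked by policy: {hostname}")
    return socket.gethostbyname(hostname)


if __name__ == "__main__":
    # Requires network access; example.com resolves normally, blocked names raise.
    print(resolve_if_allowed("example.com"))
```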
By: Akamai Technologies     Published Date: Jun 14, 2018
Cybercriminals are evolving. Increasingly, they are capitalizing on the open and unprotected nature of the Domain Name System (DNS) to launch damaging phishing, malware, and ransomware attacks. How are you proactively protecting your network and users from these targeted threats? Here are five things to ask yourself as you consider a DNS security solution for your company.
Tags : dns, phishing, malware, ransomware, security
     Akamai Technologies
By: Akamai Technologies     Published Date: Jun 14, 2018
"Existing security controls are outmatched — at best static and reactive. Current layers likely aren’t protecting you against all attack vectors, like the vulnerable back door that is recursive DNS. And security mechanisms that frustrate, impede, or disallow legitimate users, devices, or applications will have low adoption rates and/or will curtail productivity. Benign users may even circumvent these processes, further undermining your corporate security posture and creating more gaps in your defense-in- depth strategy. One of the many use cases associated with a zero trust security strategy is protecting your network — and most importantly, your data — from malware. "
Tags : dns, rdns, security, zero trust security, malware, data, network security
     Akamai Technologies
By: Akamai Technologies     Published Date: Jun 14, 2018
"A zero trust security and access model is the solution: Every machine, user, and server should be untrusted until proven otherwise. But how do you achieve zero trust? Read this white paper authored by Akamai’s CTO, Charlie Gero, to learn how to transition to a perimeter-less world in an incredibly easy way, with steps including: • The zero trust method of proof • The vision behind Google BeyondCorpTM • Analysis of application access vs. network access • How to deploy user grouping methodology • Guidance for application rollout stages 1-8"
Tags : security, perimeter security, zero trust, cloud, enterprise security
     Akamai Technologies
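To make the “application access vs. network access” distinction above concrete, here is a minimal, hypothetical sketch of a zero trust authorization check: every request must present a verifiable identity, and nothing is trusted merely for being on the network. The token format, groups, and device-posture flag are illustrative assumptions, not any vendor’s API.

```python
# Hypothetical policy data for illustration only.
ALLOWED_GROUPS_BY_APP = {"payroll": {"finance"}, "wiki": {"finance", "engineering"}}
SESSIONS = {"token-abc": {"user": "alice", "group": "finance", "device_ok": True}}


def authorize(app_name, token):
    """Zero trust check: identity, group membership, and device posture are all
    verified per request, regardless of the caller's network location."""
    session = SESSIONS.get(token)
    if session is None or not session["device_ok"]:
        return False  # unknown or unhealthy caller is never trusted
    return session["group"] in ALLOWED_GROUPS_BY_APP.get(app_name, set())


print(authorize("payroll", "token-abc"))  # True: finance user on a healthy device
print(authorize("wiki", "token-xyz"))     # False: unknown token is rejected
```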
By: Butler Technologies     Published Date: Jul 02, 2018
The Tenth Annual State of the Network Global Study focuses a lens on the network team’s role in security investigations. Results indicate that 88 percent of network teams are now spending time on security issues. In fact, out of 1,035 respondents, nearly 3 out of 4 spend up to 10 hours per week working exclusively on these types of problems, in addition to managing network upgrades, SDN, cloud, and big data initiatives. When it comes to technology adoption, both cloud and 100 GbE deployment continue to grow aggressively. VoIP adoption is closing in on 60 percent and software-defined networking (SDN) is projected to cross the halfway mark, indicating compounding network complexity amid the ongoing struggle to identify security threats. With growth comes change, and trends identified in this year’s survey include a rise in email and browser-based malware attacks (63 percent) and an increase in attack sophistication (52 percent). Nearly 1 in 3 also report a surge in DDoS attacks.
Tags : 
     Butler Technologies
By: Butler Technologies     Published Date: Jul 02, 2018
VoIP’s extreme sensitivity to delay and packet loss, compared to other network applications such as web and e-mail services, presents a real challenge. A basic understanding of VoIP traffic and of the quality metrics provided by VoIP monitoring tools will help keep your network running smoothly. Follow our 10 VoIP best practices to ensure quality service for your network.
Tags : 
     Butler Technologies
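One of the quality metrics a VoIP monitoring tool typically reports is interarrival jitter. Here is a minimal Python sketch of the RFC 3550-style jitter estimate; the input format (send/receive timestamps in seconds) is an assumption made for illustration.

```python
def interarrival_jitter(packets):
    """Estimate interarrival jitter for an RTP-like stream of
    (send_time, receive_time) tuples, per the RFC 3550 smoothing rule."""
    jitter = 0.0
    prev_transit = None
    for send_time, recv_time in packets:
        transit = recv_time - send_time           # one-way transit time
        if prev_transit is not None:
            d = abs(transit - prev_transit)       # transit-time variation
            jitter += (d - jitter) / 16.0         # exponential smoothing (gain 1/16)
        prev_transit = transit
    return jitter


# Toy example: 20 ms packet spacing with a little receive-side variation.
samples = [(0.000, 0.050), (0.020, 0.071), (0.040, 0.089), (0.060, 0.112)]
print(f"jitter ≈ {interarrival_jitter(samples) * 1000:.2f} ms")
```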
By: Butler Technologies     Published Date: Jul 02, 2018
Increasingly complex networks require more than a one-size-fits-all approach to ensuring adequate performance and data integrity. In addition to garden-variety performance issues such as slow applications, increased bandwidth requirements, and lack of visibility into cloud resources, there is also the strong likelihood of a malicious attack. While many security solutions like firewalls and intrusion detection systems (IDS) work to prevent security incidents, none are 100 percent effective. However, there are proactive measures that any IT team can implement now to help ensure that a successful breach is found quickly and effectively remediated, and that evidential data is available in the event of civil and/or criminal proceedings.
Tags : 
     Butler Technologies