E&O Opportunities

XSEDE

Western Digital begins production of the world's tallest 3D NAND 'skyscraper'

February 9, 2017

Western Digital today announced that it has kicked off production of the industry’s densest 3D NAND flash chips, which stack 64 layers atop one another and enable three bits of data to be stored in each cell. The 3D NAND flash chips are based on a vertical stacking, or 3D, technology that Western Digital and partner Toshiba call BiCS (Bit Cost Scaling). WD has launched pilot production of its first 512 gigabit (Gb) 3D NAND chip based on the 64-layer NAND flash technology. In the same way a skyscraper allows for greater density in a smaller footprint, stacking NAND flash cells (versus planar or 2D memory) lets manufacturers increase density, which lowers the cost per gigabyte of capacity. The technology also increases data reliability and improves the speed of solid-state memory. Read more at http://www.pcworld.com/article/3166099/storage/western-digital-begins-production-of-the-worlds-tallest-3d-nand-skyscraper.html
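
A quick back-of-the-envelope sketch of what those quoted figures imply; only the 512 Gb per die, 64 layers, and three bits per cell come from the article, and everything printed is derived arithmetic rather than a vendor-confirmed specification.

```c
/* Rough capacity math for a 512 Gb, 64-layer, 3-bit-per-cell (TLC) 3D NAND die.
 * Only the three constants below are quoted in the article; the printed
 * values are derived, not vendor-confirmed. */
#include <stdio.h>

int main(void) {
    const double die_gigabits  = 512.0; /* quoted die density            */
    const double layers        = 64.0;  /* quoted vertical layer count   */
    const double bits_per_cell = 3.0;   /* "three bits ... in each cell" */

    printf("Capacity per die:  %.0f GB\n", die_gigabits / 8.0);                  /* 64 GB   */
    printf("Cells per die:     ~%.1f billion\n", die_gigabits / bits_per_cell);  /* ~170.7  */
    printf("Density per layer: %.0f Gb\n", die_gigabits / layers);               /* 8 Gb    */
    return 0;
}
```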

Afnan Abdul Rehman 2017-02-09T20:21:34Z

Guilty SPARC: Oracle euthanizes Solaris 12, expunging it from roadmap

February 9, 2017

Rumors have been circulating since late last year that Oracle was planning to kill development of the Solaris operating system, with major layoffs coming to the operating system's development team. Others speculated that future versions of the Unix platform Oracle acquired with Sun Microsystems would be designed for the cloud and built for the Intel platform only and that the SPARC processor line would meet its demise. The good news, based on a recently released Oracle roadmap for the SPARC platform, is that both Solaris and SPARC appear to have a future. The bad news is that the next major version of Solaris—Solaris 12—has apparently been canceled, as it has disappeared from the roadmap. Instead, it's been replaced with "Solaris 11 next"—and that version is apparently the only update planned for the operating system through 2021. Read more at https://arstechnica.com/information-technology/2017/01/oracle-sort-of-confirms-demise-of-solaris-12-effort/

Afnan Abdul Rehman 2017-02-09T20:20:18Z

Scientists Come Up with Blueprint for Supersized Quantum Computer

February 9, 2017

Quantum computing is back in the news – this time courtesy of a research team that has published a blueprint for what they are calling “the most powerful computer on Earth.” According to the press release issued on Wednesday, a quantum computer based on this blueprint can be built with currently available technologies and would have nearly unlimited computational power. The announcement sums up the significance of the new design thusly: “Once built, the computer’s capabilities mean it would have the potential to answer many questions in science; create new, lifesaving medicines; solve the most mind-boggling scientific problems; unravel the yet unknown mysteries of the furthest reaches of deepest space; and solve some problems that an ordinary computer would take billions of years to compute.” Read more at https://www.top500.org/news/scientists-come-up-with-blueprint-for-supersized-quantum-computer/

Afnan Abdul Rehman 2017-02-09T20:18:18Z

Modifying the 'middle end' of a popular compiler yields more-efficient parallel programs

February 9, 2017

Compilers are programs that convert computer code written in high-level languages intelligible to humans into low-level instructions executable by machines. But there's more than one way to implement a given computation, and modern compilers extensively analyze the code they process, trying to deduce the implementations that will maximize the efficiency of the resulting software. Code explicitly written to take advantage of parallel computing, however, usually loses the benefit of compilers' optimization strategies. That's because managing parallel execution requires a lot of extra code, and existing compilers add it before the optimizations occur. The optimizers aren't sure how to interpret the new code, so they don't try to improve its performance. At the Association for Computing Machinery's Symposium on Principles and Practice of Parallel Programming next week, researchers from MIT's Computer Science and Artificial Intelligence Laboratory will present a new variation on a popular open-source compiler that optimizes before adding the code necessary for parallel execution. As a consequence, says Charles E. Leiserson, the Edwin Sibley Webster Professor in Electrical Engineering and Computer Science at MIT, the compiler "now optimizes parallel code better than any commercial or open-source compiler, and it also compiles where some of these other compilers don't." That improvement comes purely from optimization strategies that were already part of the compiler the researchers modified, which was designed to compile conventional, serial programs. The researchers' approach should also make it much more straightforward to add optimizations specifically tailored to parallel programs. And that will be crucial as computer chips add more and more "cores," or parallel processing units, in the years ahead. Read more at https://phys.org/news/2017-01-middle-popular-yields-more-efficient-parallel.html
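
The effect described above is easiest to see on a toy loop. The hedged sketch below uses plain C with an OpenMP pragma standing in for a generic fork-join construct (the MIT work itself targets Cilk-style parallelism inside a modified open-source compiler, not OpenMP): the subexpression scale * scale is loop-invariant, and a serial optimizer would normally hoist it out of the loop. If the compiler first rewrites the parallel loop body into opaque calls to a parallel runtime, that hoisting opportunity is hidden; keeping the parallel structure in the compiler's intermediate representation until after optimization preserves it.

```c
/* Illustrative sketch only (not the researchers' actual code): a loop-invariant
 * expression that a conventional serial optimizer would hoist out of the loop.
 * If parallelism is lowered to runtime-library calls before optimization runs,
 * the loop body ends up inside a function the optimizer no longer analyzes,
 * and the redundant multiply stays in the loop. */
#include <stddef.h>

void scaled_axpy(double *restrict y, const double *restrict x,
                 double scale, size_t n) {
    #pragma omp parallel for
    for (size_t i = 0; i < n; i++) {
        /* scale * scale does not depend on i; a serial compiler hoists it. */
        y[i] += (scale * scale) * x[i];
    }
}
```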

Afnan Abdul Rehman 2017-02-09T20:15:25Z

Apply Now for the SC17 Student Cluster Competition - Deadline: April 7, 2017

February 9, 2017

SC17 is excited to hold another nail-biting Student Cluster Competition, or SCC, now in its eleventh year, as an opportunity to showcase student expertise in a friendly yet spirited competition. Held as part of SC17’s Students@SC, the Student Cluster Competition is designed to introduce the next generation of students to the high-performance computing community. Over the years, the competition has drawn teams from across the United States and around the world. The Student Cluster Competition is a multi-disciplinary HPC experience integrated within the HPC community’s biggest gathering, the Supercomputing Conference. The competition is a microcosm of a modern HPC center that teaches and inspires students to pursue careers in the field. It demonstrates the breadth of skills, technologies and science that it takes to build, maintain and utilize a supercomputer. In this real-time, non-stop, 48-hour challenge, teams of undergraduate and/or high school students assemble a small cluster on the exhibit floor and race to complete a real-world workload across a series of applications while impressing HPC industry judges. Learn more at http://insidehpc.com/2017/02/apply-now-sc17-student-cluster-competition/

Afnan Abdul Rehman 2017-02-09T20:13:45Z

GPU Technology Conference - May 8-11, 2017

February 9, 2017

The GTC conference agenda has been designed to encourage compelling conversations between professionals across many industries, from automotive to big data analytics and manufacturing to energy. Further tracks on robotics, professional visualization and virtualization will offer you the chance to explore some of the other technology mega-trends that will shape your work in the years to come. GTC and the global GTC event series offer valuable training and a showcase of the most vital work in the computing industry today – including artificial intelligence and deep learning, healthcare, virtual reality, accelerated analytics, and self-driving cars. Learn more at http://www.gputechconf.com/

Afnan Abdul Rehman 2017-02-09T20:10:30Z

Call for Papers: Semantic Web/Cloud Information and Services Discovery and Management (SWISM 2017) - Deadline: Feb 28, 2017

February 9, 2017

The Semantic Web/Cloud Information and Services Discovery and Management (SWISM) workshop brings together scientists, engineers, computer users, and students to exchange and share their experiences, new ideas, and research results on all aspects (theory, applications and tools) of intelligent methods applied to Web- and Cloud-based systems, and to discuss the practical challenges encountered and the solutions adopted. Read more at http://parsec.unina2.it/~workshops/ocs/index.php/swism/swism2017

Afnan Abdul Rehman 2017-02-09T20:00:27Z

Call for Papers: International Workshop on Big Data Analytics (BigDAW 2017) - Deadline: February 24, 2017

February 9, 2017

Managing and processing large volumes of data, or “Big Data”, and gaining meaningful insights from it is a significant challenge facing the distributed computing community; as a consequence, many businesses are demanding large-scale streaming data analytics. This has significant impact in a wide range of domains including health care, biomedical research, Internet search, finance and business informatics, and scientific computing. Despite considerable advances in high performance, large storage, and high computation power, there remain challenges in identifying, clustering, classifying, and interpreting a large spectrum of information. The purpose of this workshop is to provide a fertile ground for collaboration between research institutions and industry in analytics, machine learning, and high-performance computing. Read more at https://www.zurich.ibm.com/BigDataAnalytics/

Afnan Abdul Rehman 2017-02-09T19:58:21Z

Call for Proposals: High Performance Computing Systems and Applications (HPCS 2017) - Deadline: February 21, 2017

February 9, 2017

You are cordially invited to participate in this international event through a paper submission, a workshop or special session organization, a tutorial, a demo, a poster, an exhibit, a panel discussion, or a doctoral dissertation, whichever sounds most appropriate and convenient to you. The conference will include invited presentations by experts from academia, industry, and government, as well as contributed paper presentations describing original work on the current state of research in high-performance and large-scale computing systems, their design, performance and use, their use in modeling and simulation, and their applications. There will also be tutorial sessions, symposia, workshops, special sessions, demos, posters, panel discussions, a doctoral colloquium, and exhibits. Conference sponsorships are welcome. Read more at http://hpcs2017.cisedu.info/

Afnan Abdul Rehman 2017-02-09T19:57:11Z

SDSC’s ‘Comet’ Supercomputer Surpasses ‘10,000 Users’ Milestone

February 9, 2017

Comet, the petascale supercomputer at the San Diego Supercomputer Center (SDSC), an Organized Research Unit of UC San Diego, has easily surpassed its target of serving at least 10,000 researchers across a diverse range of science disciplines, from astrophysics to redrawing the “tree of life”. In fact, about 15,000 users have used Comet to run science gateway jobs alone since the system went into production less than two years ago. A science gateway is a community-developed set of tools, applications, and data services and collections that are integrated through a web-based portal or suite of applications. Another 2,600 users have accessed the high-performance computing (HPC) resource via traditional runs. The target was established by SDSC as part of its cooperative agreement with the National Science Foundation (NSF), which awarded funding for Comet in late 2013. “Comet was designed to meet the needs of what is often referred to as the ‘long tail’ of science – the idea that the large number of modest-sized computationally-based research projects represent, in aggregate, a tremendous amount of research that can yield scientific advances and discovery,” said SDSC Director Michael Norman, principal investigator for the Comet project. Learn more at http://www.sdsc.edu/News%20Items/PR20170201_Comet_10k.html

Afnan Abdul Rehman 2017-02-09T19:55:03Z

When Data’s Deep, Dark Places need to be Illuminated

February 9, 2017

Much of the data of the World Wide Web hides like an iceberg below the surface. The so-called 'deep web' has been estimated to be 500 times bigger than the 'surface web' seen through search engines like Google. For scientists and others, the deep web holds important computer code and its licensing agreements. Nestled further inside the deep web, one finds the 'dark web,' a place where images and video are used by traders in illicit drugs, weapons, and human trafficking. A new data-intensive supercomputer called Wrangler is helping researchers obtain meaningful answers from the hidden data of the public web. The Wrangler supercomputer got its start in response to the question: can a computer be built to handle massive amounts of I/O (input and output)? In 2013, the National Science Foundation (NSF) got behind this effort and awarded the Texas Advanced Computing Center (TACC), Indiana University, and the University of Chicago $11.2 million to build a first-of-its-kind data-intensive supercomputer. Wrangler's 600 terabytes of lightning-fast flash storage enabled the speedy reads and writes of files needed to fly past big data bottlenecks that can slow down even the fastest computers. It was built to work in tandem with number crunchers such as TACC's Stampede, which in 2013 was the sixth fastest computer in the world. While Wrangler was being built, a separate project came together headed by the Defense Advanced Research Projects Agency (DARPA) of the U.S. Department of Defense. Back in 1969, DARPA had built the ARPANET, which eventually grew to become the Internet, as a way to exchange files and share information. In 2014, DARPA wanted something new: a search engine for the deep web. They were motivated to uncover the deep web's hidden and illegal activity, according to Chris Mattmann, chief architect in the Instrument and Science Data Systems Section of the NASA Jet Propulsion Laboratory (JPL) at the California Institute of Technology. Learn more at https://www.tacc.utexas.edu/-/preventing-blood-clots-with-a-new-metric-for-heart-function

Afnan Abdul Rehman 2017-02-09T19:54:07Z

NCAR Launches Five-Petaflop Supercomputer

February 9, 2017

The National Center for Atmospheric Research (NCAR) has begun operation of Cheyenne, a 5.34-petaflop supercomputer that will support a range of research related to weather, climate, and other Earth sciences. The system is currently ranked as the 20th most powerful system in the world, with a Linpack mark of 4.79 petaflops. Cheyenne was built by SGI, which is now a part of Hewlett Packard Enterprise. The hardware consists of Intel “Broadwell” Xeon processors (18-core 2.3GHz E5-2697v4) and Mellanox EDR InfiniBand. The system encompasses more than four thousand dual-socket nodes, one-fifth of which are equipped with 128GB of memory, with the remaining four-fifths containing 64GB. Total memory capacity is 313 terabytes. External data storage consists of 20 petabytes of DDN’s SFA14KX systems, expandable to 40 petabytes. Aggregate I/O bandwidth is 200 GB per second. Forty-eight 800GB SSD drives are included to speed metadata access. The storage system is overlaid by IBM’s Spectrum Scale parallel file system (formerly GPFS). Cheyenne’s storage will be integrated with the center’s existing GLADE central disk resource, providing access to a total of about 37 petabytes of capacity. Learn more at https://www.top500.org/news/ncar-launches-five-petaflop-supercomputer/
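
The quoted 313 TB aggregate is easy to cross-check from the node breakdown. The sketch below assumes a node count of 4,032 with an 864/3,168 split between 128 GB and 64 GB nodes; the article itself says only "more than four thousand" nodes and a one-fifth / four-fifths split, so those exact counts are assumptions.

```c
/* Cross-check of Cheyenne's quoted 313 TB aggregate memory.
 * The 4,032-node total and the 864/3,168 split are assumptions consistent
 * with "more than four thousand" nodes and a roughly one-fifth share of
 * 128 GB nodes; only the 128 GB / 64 GB figures come from the article. */
#include <stdio.h>

int main(void) {
    const long total_nodes = 4032;                        /* assumed                 */
    const long large_nodes = 864;                         /* ~one-fifth, 128 GB each */
    const long small_nodes = total_nodes - large_nodes;   /* 64 GB each              */

    long total_gb = large_nodes * 128 + small_nodes * 64; /* 313,344 GB              */
    printf("Aggregate memory: %ld GB (~%.0f TB)\n", total_gb, total_gb / 1000.0);
    return 0;
}
```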

Afnan Abdul Rehman 2017-02-09T19:52:57Z

NVIDIA Rolls Out New Quadro Pascal GPUs

February 9, 2017

Today Nvidia introduced a range of Quadro GPUs based on its Pascal architecture. The new GPUs “transform desktop workstations into supercomputers with breakthrough capabilities for professional workflows across many industries.” Workflows in design, engineering and other areas are evolving rapidly to meet the exponential growth in data size and complexity that comes with photorealism, virtual reality and deep learning technologies. To tap into these opportunities, the new NVIDIA Quadro Pascal-based lineup provides an enterprise-grade visual computing platform that streamlines design and simulation workflows with up to twice the performance of the previous generation and ultra-fast memory. The new cards complete the entire NVIDIA Quadro Pascal lineup, including the previously announced P6000, P5000 and mobile GPUs. The entire NVIDIA Quadro Pascal lineup supports the latest NVIDIA CUDA 8 compute platform, providing developers access to powerful new Pascal features in developer tools, performance enhancements and new libraries, including nvGraph. The new NVIDIA Quadro products will be available starting in March from leading workstation OEMs, including Dell, HP, Lenovo and Fujitsu, and authorized distribution partners, including PNY Technologies in North America and Europe, ELSA/Ryoyo in Japan and Leadtek in Asia Pacific. Learn more at http://insidehpc.com/2017/02/nvidia-rolls-new-quadro-pascal-gpus/

Afnan Abdul Rehman 2017-02-09T19:52:01Z

Supermicro Deploys 30,000+ MicroBlade Servers to Enable One of the World’s Highest Efficiency Datacenters

February 9, 2017

Super Micro Computer, Inc., a global leader in compute, storage and networking technologies including green computing, has announced deployment of its disaggregated MicroBlade systems at one of the world’s highest-density and most energy-efficient data centers. A technology-leading Fortune 100 company has deployed over 30,000 Supermicro MicroBlade servers at its Silicon Valley data center facility, which runs at a Power Usage Effectiveness (PUE) of 1.06, to support the company’s growing compute needs. Compared to a traditional data center running at a PUE of 1.49 or more, the new data center achieves an 88 percent improvement in overall energy efficiency. When the build-out is complete at a 35-megawatt IT load, the company is targeting $13.18M in savings per year in total energy costs across the entire data center. The Supermicro MicroBlade system represents an entirely new type of computing platform. It is a powerful and flexible extreme-density 3U or 6U all-in-one total system that features 14 or 28 hot-swappable MicroBlade server blades. The system delivers an 86 percent improvement in power/cooling efficiency with common shared infrastructure, a 56 percent improvement in system density, and a lower initial investment versus 1U servers. The solution packs 280 Intel Xeon processor-based servers per rack and achieves 45 to 65 percent CAPEX savings per refresh cycle with a disaggregated rack-scale design. Learn more at https://www.hpcwire.com/off-the-wire/supermicro-deploys-30000-microblade-servers-enable-one-worlds-highest-efficiency-datacenters/
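
One plausible reading of the "88 percent improvement in overall energy efficiency" is that it compares the non-IT overhead implied by the two PUE figures (PUE is total facility power divided by IT power, so PUE minus 1 is the overhead per watt of IT load). The arithmetic below follows that interpretation; the interpretation itself, not the numbers, is the assumption.

```c
/* PUE overhead comparison for the figures quoted in the article. */
#include <stdio.h>

int main(void) {
    const double pue_traditional = 1.49; /* "traditional data center"   */
    const double pue_new         = 1.06; /* Supermicro MicroBlade site  */

    double overhead_old = pue_traditional - 1.0; /* 0.49 W overhead per IT watt */
    double overhead_new = pue_new - 1.0;         /* 0.06 W overhead per IT watt */
    double reduction    = (overhead_old - overhead_new) / overhead_old;

    printf("Overhead reduction: %.0f%%\n", reduction * 100.0); /* ~88% */
    return 0;
}
```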

Afnan Abdul Rehman 2017-02-09T19:49:42Z

Artificial Intelligence Is About to Conquer Poker, But Not Without Human Help

January 24, 2017

As Friday night became Saturday morning, Dong Kim sounded defeated. Kim is a high-stakes poker player who specializes in no-limit Texas Hold ‘Em. The 28-year-old Korean-American typically matches wits with other top players on high-stakes internet sites or at the big Las Vegas casinos. But this month, he’s in Pittsburgh, playing poker against an artificially intelligent machine designed by two computer scientists at Carnegie Mellon. No computer has ever beaten the top players at no-limit Texas Hold ‘Em, a particularly complex game of cards that serves as the main event at the World Series of Poker. Nearly two years ago, Kim was among the players who defeated an earlier incarnation of the AI at the same casino. But this time is different. Late Friday night, just ten days into this twenty-day contest, Kim told me that he and his fellow humans have no real chance of winning. “I didn’t realize how good it was until today. I felt like I was playing against someone who was cheating, like it could see my cards,” he said after returning to his hotel room to prep for the next day. “I’m not accusing it of cheating. It was just that good.” The machine is called Libratus—a Latin word meaning balanced—and Kim says the name is an apt description of the machine’s play. “It does a little bit of everything,” he says. Read more at https://www.wired.com/2017/01/ai-conquer-poker-not-without-human-help/

Afnan Abdul Rehman 2017-01-24T21:23:11Z

China to develop prototype super, super computer in 2017

January 24, 2017

China plans to develop a prototype exascale computer by the end of the year, state media said Tuesday, as the country seeks to win a global race to be the first to build a machine capable of a billion billion calculations per second. If successful, the achievement would cement its place as a leading power in the world of supercomputing. The Asian giant built the world's fastest supercomputer, the Sunway TaihuLight machine, in June last year, which was twice as fast as the previous number one. It used only locally made microchips, making it the first time a country has taken the top spot without using US technology. Exascale computers are even more powerful, and can execute at least one quintillion (a billion billion) calculations per second. Though a prototype was in the pipeline, a complete version of such a machine would take a few more years to complete, Xinhua news agency cited Zhang Ting, application engineer at the National Supercomputer Center in the port city of Tianjin, as saying. "A complete computing system of the exascale supercomputer and its applications can only be expected in 2020, and will be 200 times more powerful than the country's first petaflop computer Tianhe-1, recognized as the world's fastest in 2010," said Zhang. The exascale computer could have applications in big data and cloud computing work, he added, noting that its prototype would lead the world in data transmission efficiency as well as calculation speed. Read more at https://phys.org/news/2017-01-china-prototype-super.html
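
For scale, a hedged check of the quoted figures: one exaflop is 10^18 floating-point operations per second, i.e. 1,000 petaflops. Tianhe-1A's roughly 4.7-petaflop peak is not stated in the article and is assumed here from public TOP500 listings; multiplying it by the quoted factor of 200 lands close to one exaflop, so the two claims are consistent.

```c
/* Scale check for the "200 times more powerful than Tianhe-1" claim.
 * Tianhe-1A's ~4.7 PF peak is an assumption drawn from public TOP500 data,
 * not a figure given in the article. */
#include <stdio.h>

int main(void) {
    const double exaflop_in_pf = 1000.0; /* 1 EF = 1e18 FLOP/s = 1000 PF */
    const double tianhe1a_pf   = 4.7;    /* assumed peak of Tianhe-1A    */

    printf("200 x Tianhe-1A peak: %.0f PF (~1 exaflop)\n", 200.0 * tianhe1a_pf);
    printf("Exaflop / Tianhe-1A:  ~%.0fx\n", exaflop_in_pf / tianhe1a_pf);
    return 0;
}
```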

Afnan Abdul Rehman 2017-01-24T21:22:07Z

MIT Lincoln Laboratory Takes the Mystery Out of Supercomputing

January 24, 2017

The introduction of multicore and manycore processors capable of handling highly parallel workflows is changing the face of high performance computing (HPC). Many supercomputer users, like the big DOE labs, are implementing these next-generation systems. They are now engaged in significant code modernization efforts to adapt their key present and future applications to the new processing paradigm, and to bring their internal and external users up to speed. For some in the HPC community, this creates unanticipated challenges along with great opportunities. Here at MIT Lincoln Laboratory, with its recently opened Supercomputing Center, the picture is a little different. We have been preparing for these specific changes over the past 15 years and conducting what we call “interactive supercomputing” since the 1950s. It has been clear from the start that data analysis is the major focus for most of our users, and that certain combinations of mathematics and processors will be beneficial to them. Read more at http://insidehpc.com/2017/01/mit-lincoln-laboratory-takes-mystery-supercomputing/

Afnan Abdul Rehman 2017-01-24T21:12:22Z

Applications for Two PRACE Summer Activities Are Now Being Accepted

January 24, 2017

Both activities are expense-paid programs and will allow participants to travel to and stay at a hosting location and learn about HPC: the 2017 International Summer School on HPC Challenges in Computational Sciences and the PRACE Summer of HPC 2017 program. The summer school is sponsored by Compute/Calcul Canada, the Extreme Science and Engineering Discovery Environment (XSEDE), the Partnership for Advanced Computing in Europe (PRACE) and the RIKEN Advanced Institute for Computational Science (RIKEN AICS). Graduate students and postdoctoral scholars from institutions in Canada, Europe, Japan and the United States are invited to apply for the eighth International Summer School on HPC Challenges in Computational Sciences, to be held June 25–30, 2017, in Boulder, Colorado, United States of America. The PRACE Summer of HPC is a PRACE outreach and training program that offers summer placements at top HPC centers across Europe to late-stage undergraduates and early-stage postgraduate students. Up to twenty top applicants from across Europe will be selected to participate. Participants will spend two months working on projects related to PRACE technical or industrial work and produce a report and a visualization or video of their results. Early-stage postgraduate and late-stage undergraduate students are invited to apply for the PRACE Summer of HPC 2017 program, to be held in July and August 2017. Consisting of a training week and two months on placement at top HPC centers around Europe, the program affords participants the opportunity to learn and share more about PRACE and HPC, and includes accommodation, a stipend and travel to their HPC center placement. Learn more at https://www.hpcwire.com/off-the-wire/applications-two-prace-summer-activities-now-accepted/

Afnan Abdul Rehman 2017-01-24T21:11:13Z

Caltech Upgrading Demo Cluster with Intel Xeon Phi x200 Processor

January 24, 2017

Nor-Tech reports that Caltech is upgrading its Nor-Tech demo cluster with Intel Xeon Phi. The demo cluster is a no-cost, no-strings opportunity for current and prospective clients to test-drive simulation applications on a cutting-edge Nor-Tech HPC cluster with Intel Xeon Phi and other high-demand platforms installed and configured. Users can also integrate their existing platforms into the demo cluster. Integrating Nor-Tech clusters with bootable Intel Xeon Phi processors eliminates node bottlenecks, simplifies code modernization, and builds on a power-efficient structure. The bootable Intel Xeon Phi x86 CPU host processor offers an integrated architecture for powerful, highly parallel performance that enables deeper insight, innovation, and impact for the most demanding HPC applications. Learn more at http://insidehpc.com/2017/01/cal-tech-upgrading-demo-cluster-intel-xeon-phi-x200-processor/

Afnan Abdul Rehman 2017-01-24T21:10:15Z

OpenPOWER Academic Group Carries 2016 Momentum to New Year

January 24, 2017

Academia has always been a leader in pushing the boundaries of science and technology, with some of the most brilliant minds in the world focused on how they can improve the tools at their disposal to solve some of the world’s most pressing challenges. That’s why, as the Leader of the OpenPOWER Academic Discussion Group, I believe working with academics in universities and research centers to develop and adopt OpenPOWER technology is key to growing the ecosystem. The Academic Discussion Group is enabling academics to carry out research and development using Power CPUs and systems, and this is creating very strong ecosystem growth for OpenPOWER-based systems. 2016 was an amazing year for us, as we helped launch new partnerships at academic institutions like A*CRC in Singapore, IIT Bombay in India, and more. We also assisted them in hosting OpenPOWER workshops where participants learned how OpenPOWER’s collaborative ecosystem is leading the way on a multitude of research areas. Armed with this knowledge, our members helped to spread the OpenPOWER gospel. Most recently, our members were at GTC India 2016 and SC16 to meet with fellow technology leaders and discuss the latest advances around OpenPOWER. After joining the OpenPOWER Foundation as an academic member in October 2016, the Universidad Nacional de Córdoba in Argentina sent professors Carlos Bederián and Nicolás Wolovick to SC16 in Salt Lake City to learn more about OpenPOWER. Read more at https://www.hpcwire.com/off-the-wire/openpower-academic-group-carries-2016-momentum-new-year/

Afnan Abdul Rehman 2017-01-24T21:08:52Z
