Atlanta, GA, Jun 1, 2011 - AccelerEyes today released version 1.0 of libJacket, enabling GPU programmers to achieve better GPU performance with less programming hassle. It builds on the success of Jacket for MATLAB®, bringing that popular runtime system and vast GPU function library to new programming languages.

The library is available for C, C++, Fortran, and Python. It is designed for use within any CUDA application, in the same way that the native CUBLAS and CUFFT libraries are used. It can also be used in place of tedious hand-written kernel code, for both productivity and performance gains.

 

libJacket scales across all CUDA-capable GPUs, from laptops to workstations to high-end supercomputers.

 

It enables high-level, matrix-style code to achieve low-level, down-to-the-metal speeds so that developers can quickly build high-performance applications. This high-level interface makes it easy to experiment with and change parts of an algorithm without having to recode and tune from scratch. libJacket is the largest and fastest set of GPU algorithms available in one package.
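To make the matrix-style idea concrete, here is a minimal sketch written against the ArrayFire C++ API, the open-source descendant of libJacket; the exact libJacket 1.0 header and function names may differ, so treat the spelling below as an assumption rather than the released interface:

#include <arrayfire.h>
#include <cstdio>
using namespace af;

int main() {
    // Create a 1024x1024 single-precision random matrix directly in GPU memory
    array A = randu(1024, 1024, f32);

    // High-level, one-line GPU operations; no hand-written CUDA kernels needed
    array B = matmul(A, A.T());     // GPU matrix multiply
    array C = fft2(B);              // GPU 2-D FFT
    float  s = sum<float>(abs(C));  // reduction copied back to the host

    printf("sum of |C| = %g\n", s);
    return 0;
}

Because each line maps to a tuned GPU routine, swapping one operation for another (for example, replacing the FFT with a convolution) does not require recoding or retuning any kernel.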

 

This new v1.0 release also incorporates the popular GFOR loop for running FOR loop iterations simultaneously on the GPU.
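As an illustration of the idea, the following sketch uses the gfor construct as it appears in ArrayFire, the open-source successor to libJacket; the libJacket 1.0 spelling may differ slightly, so the names here are assumptions:

#include <arrayfire.h>
using namespace af;

int main() {
    const int n = 64;
    array A = randu(128, n);          // 128 x n data matrix on the GPU
    array B = constant(0, 128, n);

    // gfor issues all n column updates as one batched GPU operation,
    // instead of n sequential kernel launches from an ordinary for loop
    gfor (seq i, n) {
        B(span, i) = 2 * A(span, i) + 1;
    }

    af_print(B(span, 0));             // inspect the first column
    return 0;
}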

 

Download a free 15-day trial from the AccelerEyes website.

 

In conjunction with this launch, AccelerEyes is offering two important promotions:

 

  • libJacket with Every Tesla Purchase - A free libJacket subscription is available with every new Tesla purchase. Redeem now.

  • Jacket Customers, Get libJacket 50% Off - All customers with Jacket licenses under maintenance are eligible to get libJacket 50% off. Redeem now.

 

Enterprise customers have already started adopting libJacket. Northrop Grumman engineer Xiang Ma says of libJacket: "I spent a lot of time searching for efficient ways to utilize GPU power without spending too much time on low-level detail. libJacket is the best solution currently available in the market for our goal of significantly improving performance with the least amount of time and effort. I also really like AccelerEyes' super customer support for helping us with great response time."

 

"libJacket is an important milestone for AccelerEyes," says John Melonakos, CEO at AccelerEyes. "It lets people outside of MATLAB® enjoy the benefits of Jacket technology: the great performance and high-level programming style, as well as the stability, support, and rapid innovation of our dedicated commercial software development."

 

Visit the company website to access the new software today!

 

Pricing and availability

 

libJacket 1.0 is now available for download on the AccelerEyes website. Pricing for a libJacket development license is US$999 (US$350 academic). Development licenses are sold as 1-year subscriptions that include updates. Deployment licenses are also available; request a quote by email.

 

About AccelerEyes
AccelerEyes launched in 2007 to commercialize Jacket, the first software platform to deliver productivity in GPU computing. With advanced language processing and runtime technology that transforms CPU applications into high-performance GPU code, Jacket extends from desktop workstation performance to fully leverage GPU clusters. Based in Atlanta, GA, the privately held company markets Jacket for a range of defense, intelligence, biomedical, financial, research, and academic applications. Additional information is available at http://www.accelereyes.com/.

 

