Table of Contents
- PLENARY: Michael M. Little, NASA Langley Research Center
- PLENARY: Henry Neeman, University of Oklahoma
- PLENARY: John Shalf, Lawrence Berkeley National Laboratory/National Energy Research Scientific Computing Center (NERSC)
- PLENARY: Platinum Sponsor Speaker: Stephen Wheat, Intel Corp
- Louisiana School for Math, Science, and the Arts
- Rachana Ananthakrishnan, University of Chicago/Argonne National Laboratory
- Daniel Andresen, Kansas State University
- Workalemahu M. Berhanu, University of Oklahoma
- Shane Corder, Children's Mercy Hospital
- Bob Crovella, NVIDIA
- Jason Goodman, Cray Inc.
- Carl Grant, University of Oklahoma
- Darren King, Spectra Logic
- George Louthan, Oklahoma Innovation Institute
- Greg Monaco, Great Plains Network
- Kevin Paschal, CommScope
- Jeff Pummill, University of Arkansas
- Fatih Yasar, Hacettepe University/University of Oklahoma
Other speakers to be announced
PLENARY SPEAKERS
Advanced Development Systems Engineer
Atmospheric
Science Data Center
(ASDC)
NASA
Langley Research Center
Topic:
"Big Data in a Supercomputing World"
Slides:
available after the Symposium
Abstract
Recent developments in supercomputing,
semantic technology and big data solutions
have created an opportunity
for the science community
that is unprecedented.
These technologies enable scientists
to correlate experimental or observational data
with model output,
greatly improving
the ability of scientists
to create new models of natural phenomena,
validate them
and
rapidly adjust their behavior
in response to details
learned from the observational data.
Visualization tools
have evolved to a new level of sophistication
and
the long awaited advent of
powerful data analytics tools
permit rapid evaluation of model output.
Large volumes of previously intractable data
can now be processed,
analyzed,
and understood.
Robust,
high-precision ontologies,
and new tools for managing and leveraging them,
provide a means for organizing,
annotating,
mediating,
and federating heterogeneous data resources.
It is time for
supercomputing centers and their advocates
to enter into a new partnership with
the domain scientists,
bringing their expertise and knowledge of
systems and computer science
more directly in support of the science.
It is time to start leveraging
cloud computing architectures,
new data storage techniques
and
other innovations
so that supercomputing capabilities
more effectively support
the new objectives of their customers
as well as new customer communities.
Biography
Mike Little works in
both the supercomputing and
earth remote sensing communities,
leveraging data exposure technologies
that can improve the
discoverability,
accessibility and understandability of
large data sets and model output.
He has been active in the management of
NASA's supercomputing program
for over 20 years and,
most recently,
in applying those assets to
the analysis of large, diverse or
rapidly changing data sets
for complex scientific investigations.
During a recent assignment to NASA HQ,
he led collaboration between
the NASA Science Mission Directorate (SMD)
and
the NASA Chief Information Officer (CIO)
to evaluate NASA's
Nebula Cloud Computing Capability
as a tool to do more science
within the same budget.
He brought together the
High End Computing Capability
(HECC)
at
NASA Ames Research Center
and the
NASA Center for Climate Simulation
(NCCS)
at the
Goddard Spaceflight Center
(GSFC)
as well as the CIO's office at the
Jet Propulsion Laboratory
(JPL),
to complete the study
in preparation for an FY12 budget decision.
As part of this testing,
the
Amazon Web Services Cloud
and
Microsoft's Azure
system were also evaluated,
which yielded a strategy for
NASA's use of this technology
to supplement
its in-house supercomputing capabilities.
During that time
he was also the NASA representative to the
Federal Big Data Senior Steering Group
(BDSSG),
formed by OSTP,
and to OSTP's
Big Earth Data Initiative
(BEDI).
Now back at the
Atmospheric Science Data Center
in
NASA's Langley Research Center,
he is intent on
connecting remote sensing data
and climate model output
for initialization, intercalibration and
intercomparison purposes.
Mike has managed
computing technology programs at NASA
for almost 21 years,
first in the
Office of
Aeronautics, Exploration and Space Technologies
at NASA Headquarters
and then later at
NASA Langley Research Center,
including the
CERES Instrument
and the
Atmospheric Science Data Center.
He also worked at the multi-agency
NextGen Air Transportation
Joint Planning and Development Office
in the
Net-Centric Operations Division.
Prior to NASA,
Mike worked on the
1990 Census,
the
Air Force
Consolidated Space Operations Center
in Colorado Springs,
and various
US Navy
and
Marine Corps
system development programs.
Mike's first legitimate work experience
was as a nuclear-trained submarine officer
boring holes in the ocean
after receiving a Bachelor of Science in
Physics
from the
University of Missouri
in 1972.
Assistant Vice President/Research
Strategy Advisor
Information
Technology
Director
OU
Supercomputing Center for Education
& Research (OSCER)
Information
Technology
Associate Professor
College
of Engineering
Adjunct Faculty
School
of Computer Science
University
of Oklahoma
Topic:
"OSCER State of the Center Address"
Slides:
PowerPoint
PDF
Talk Abstract
The
OU
Supercomputing Center for
Education & Research
(OSCER)
celebrates its 11th anniversary
on August 31, 2013.
In this report,
we examine
what OSCER is,
what OSCER does,
what OSCER has accomplished
in its 11 years,
and where OSCER is going.
Biography
Dr.
Henry Neeman
is the
Director of the
OU
Supercomputing Center for Education &
Research,
Assistant Vice President
Information Technology
–
Research Strategy Advisor,
Associate Professor in the
College
of Engineering
and
Adjunct Faculty in the
School
of Computer Science
at the
University of
Oklahoma.
He received his BS in computer science
and his BA in statistics
with a minor in mathematics
from the
State
University of New York at Buffalo
in 1987,
his MS in CS from the
University of
Illinois at Urbana-Champaign
in 1990
and his PhD in CS from UIUC in 1996.
Prior to coming to OU,
Dr. Neeman was a postdoctoral research
associate at the
National
Center for Supercomputing Applications
at UIUC,
and before that served as
a graduate research assistant
both at NCSA
and at the
Center for
Supercomputing Research &
Development.
In addition to his own teaching and research,
Dr. Neeman collaborates with
dozens of research groups,
applying High Performance Computing techniques
in fields such as
numerical weather prediction,
bioinformatics and genomics,
data mining,
high energy physics,
astronomy,
nanotechnology,
petroleum reservoir management,
river basin modeling
and engineering optimization.
He serves as an ad hoc advisor
to student researchers
in many of these fields.
Dr. Neeman's research interests include
high performance computing,
scientific computing,
parallel and distributed computing
and
computer science education.
Department Head for Computer Science
Lawrence
Berkeley National Laboratory
Chief Technology Officer
National
Energy Research Scientific Computing
Center
Topic:
"Energy Efficiency and its Impact on
Requirements for
Future Programming Environments"
Slides:
PowerPoint
PDF
Talk Abstract
The current MPI+Fortran ecosystem has sustained
HPC application software development
for the past decade,
but was architected for
coarse-grained concurrency
largely dominated by
bulk-synchronous algorithms.
Future hardware constraints
and growth in explicit on-chip parallelism
will likely require
a mass migration to
new algorithms and software architecture
that is as broad and disruptive as
the migration
from vector to parallel computing systems
that occurred 15 years ago.
The challenge is to efficiently express
massive parallelism
and hierarchical data locality
without subjecting the programmer to
overwhelming complexity.
The talk will cover
the definition of abstract machine models
and quantitative examples of
how changes in hardware
are breaking
our existing abstract machine models.
We will examine potential approaches
that range from
revolutionary
asynchronous and dataflow
models of computation
to
evolutionary extensions
to existing messaging APIs and OpenMP directives.
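To make the bulk-synchronous-versus-asynchronous distinction concrete, here is a minimal sketch (ours, not the speaker's) in Python with mpi4py; the array size and neighbor pattern are arbitrary illustrations.

    # Bulk-synchronous vs. asynchronous communication, sketched with mpi4py.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    data = np.full(1000, rank, dtype='d')
    recv = np.empty(1000, dtype='d')
    dest, src = (rank + 1) % size, (rank - 1) % size

    # Bulk-synchronous step: communicate, then everyone waits before computing.
    comm.Sendrecv(data, dest=dest, recvbuf=recv, source=src)
    comm.Barrier()
    local = data.sum()

    # Asynchronous alternative: post nonblocking sends/receives, overlap them
    # with independent local work, and wait only when the data is needed.
    req_s = comm.Isend(data, dest=dest)
    req_r = comm.Irecv(recv, source=src)
    local = data.sum()                     # useful work during communication
    MPI.Request.Waitall([req_s, req_r])
    total = local + recv.sum()

The first pattern serializes every step behind the slowest rank; the second is a small instance of the latency hiding that the more revolutionary asynchronous and dataflow models generalize.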
Biography
John Shalf is Chief Technology Officer for the
National
Energy Research Scientific Computing
Center
(NERSC)
and also
Department Head for
Computer Science and Data Sciences
at
Lawrence
Berkeley National Laboratory.
Shalf is a co-author of over 60 publications
in the field of
parallel computing software and HPC technology,
including three best papers and
the widely cited report
"The Landscape of Parallel Computing Research:
A View from Berkeley"
(with David Patterson and others),
as well as
"ExaScale Software Study:
Software Challenges in Extreme Scale Systems,"
which sets the
Defense
Advanced Research Projects Agency's
(DARPA's)
information technology research
investment strategy for the next decade.
He co-led the Berkeley Lab/NERSC team
that won a
2002
R&D 100 Award
for the
RAGE robot.
Before joining Berkeley Lab in 2000,
he was a research programmer at the
National
Center for Supercomputing Applications
at the
University
of Illinois
and a visiting scientist at the
Max-Planck-Institut
für Gravitationsphysik
(Albert-Einstein-Institut)
in Potsdam, Germany,
where he co-developed the
Cactus
code framework for computational astrophysics.
General Manager, High Performance Computing
Intel
Topic:
"HPC@Intel: On Driving Industrial Innovation"
Slides:
PDF
Talk Abstract
The value of HPC to
the US National Labs and leading universities
and to their world-wide counterparts
continues to grow
and is widely recognized as
a national differentiating asset.
The value of HPC is also realized
in the private sector at large companies.
However,
the broader value of HPC remains to be realized
through its adoption by
the millions of small and medium manufacturers
around the world.
The question of why
the democratization of HPC hasn't progressed
is increasingly at
the forefront of discussions about
national competitiveness and economic growth.
Representative
Dan Lipinski's
recent comments
are spot on to this subject:
"... [A] big part of our future competitiveness
depends on our ability to move
new and emerging technologies
out of the lab
and into the mainstream of commerce.
... I believe the potential for job creation
emanating from
research being performed at these institutions
is immense."
In this talk,
I will address the value of HPC to
the manufacturing community,
map Intel's efforts and products to
that objective,
and review recent accomplishments that underscore
the value to be had through
the democratization of HPC.
Biography
Dr. Stephen Wheat is
the General Manager for
High Performance Computing
at
Intel.
He is responsible for driving
the development of Intel's HPC strategy
and
the pursuit of that strategy through
platform architecture,
eco-system development
and
collaborations.
While in this role,
Dr. Wheat has influenced
the deployment of several Top10 systems
and
many more
Top500
HPC systems.
Dr. Wheat has a wide breadth of experience
that gives him
a unique perspective in understanding
large scale HPC deployments.
He was
the Advanced Development manager
for the Storage Components Division,
the manager of
the RAID Products Development group,
the manager of
the Workstation Products Group
software and validation groups,
and manager of
the Supercomputing Systems Division (SSD)
operating systems software group.
At SSD,
he was
a Product Line Architect
and was
the systems software architect for
the
ASCI
Red
system.
Before joining Intel in 1995,
Dr. Wheat worked at
Sandia
National Laboratories,
performing leading research in
distributed systems software,
where he created and led the
SUNMOS
and
PUMA/Cougar
programs.
Dr. Wheat is a 1994
Gordon
Bell Prize
winner
and
has been awarded Intel's prestigious
Achievement Award.
He holds a patent on
dynamic load balancing in HPC systems.
He has also twice been honored as one of
HPCwire's
People to Watch,
in
2006
and
2013.
Dr. Wheat holds a Ph.D. in
Computer Science
and has several publications on
the subjects of
load balancing,
inter-process communication,
and
parallel I/O in large-scale HPC systems.
Outside of Intel,
he is a commercial multi-engine pilot
and
an FAA-certified multi-engine and instrument
flight instructor.
BREAKOUT SPEAKERS
Louisiana
School for Math, Science, and the Arts
Topic:
"HPC in High School"
Slides:
PDF
Talk Abstract
The
Louisiana
School for Math, Science, and the Arts
has an active
High Performance Computing (HPC)
program.
Students can get
Louisiana
State University
HPC,
LONI
and
XSEDE
accounts,
and they can incorporate HPC in independent
projects for graduation with distinction.
Two students presented a poster at the
XSEDE'13
conference.
Brad Burkman
will talk about organizing and funding
such opportunities for students.
Katherine Prutz and Annalise Labatut
will discuss student initiated HPC work,
focusing on their current project:
a sound-wave-based alarm system
that renders a 2D image of a face
from 3D data captured by proximity sensors.
The two students wanted to create
a project that would include
data they could gather on their own.
Chris Myles will talk about
learning cluster administration with the
LittleFe.
Biography
Brad Burkman
studied English at
Wheaton College
and
Mathematics at the
State
University of New York at Buffalo,
and has taught math at the
Louisiana School for ten years.
Katherine Prutz,
Annalise Labatut
and
Chris Myles are seniors at the
Louisiana School.
Senior Engagement Manager/Solutions Architect
Computation
Institute
University
of Chicago/Argonne
National Laboratory
Topic:
"Research Data Management-as-a-Service
with Globus Online"
Slides:
PowerPoint
PDF
Abstract
As science becomes
more computation- and data-intensive,
there is an increasing need for researchers
to move and share data
across institutional boundaries.
Managing massive volumes of data
throughout their lifecycle
is rapidly becoming
an inhibitor to
research progress,
due in part to
the complex and costly
Information Technology (IT)
infrastructure required
—
infrastructure that is typically
out of reach for
the hundreds of thousands of
small and medium labs
that conduct the bulk of scientific research.
Globus
Online
is a powerful system
that aims to provide
easy-to-use services and tools for
research data management
—
as simple as
the cloud-hosted Netflix for streaming movies,
or Gmail for e-mail
—
and make advanced IT capabilities
available to any researcher with
access to a web browser.
Globus Online provides
Software-as-a-Service (SaaS)
for research data management,
including data movement,
storage,
sharing,
and publication.
We will describe
how researchers can deal with
data management challenges
in a simple and robust manner.
Globus Online makes
large-scale data transfer and synchronization
easy
by providing a reliable,
secure,
and highly-monitored environment
with powerful and intuitive interfaces.
Globus also provides
federated identity and group management
capabilities
for integrating Globus services
into campus systems,
research portals,
and
scientific workflows.
New functionality includes
data sharing,
simplifying collaborations
within labs or around the world.
Tools specifically built for
IT administrators on campuses
and
computing facilities
give additional features,
controls,
and
visibility into users'
needs and usage patterns.
We will present use cases
that illustrate how Globus Online is used
by campuses
(e.g.,
University
of Michigan),
supercomputing centers
(e.g.,
Blue
Waters,
NERSC),
and
national cyberinfrastructure providers
(e.g.,
XSEDE)
to facilitate secure,
high-performance data movement
among local computers and HPC resources.
We will also outline
the simple steps required
to create a Globus Online endpoint
and
to make the service available
to all facility users
without specialized hardware,
software
or
IT expertise.
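As a taste of the SaaS model described above, here is a minimal sketch of a transfer using the present-day Globus Python SDK (globus_sdk); the endpoint UUIDs, paths, and access token are placeholders, and the SDK itself postdates this talk.

    # Submit a managed transfer between two Globus endpoints.
    import globus_sdk

    tc = globus_sdk.TransferClient(
        authorizer=globus_sdk.AccessTokenAuthorizer("TRANSFER_TOKEN"))

    tdata = globus_sdk.TransferData(
        tc, "SRC-ENDPOINT-UUID", "DST-ENDPOINT-UUID",
        label="example", sync_level="checksum")   # verification by the service
    tdata.add_item("/path/on/src/data.tar", "/path/on/dst/data.tar")

    task = tc.submit_transfer(tdata)
    print("task id:", task["task_id"])            # service monitors and retries

The point of the service model is what is absent here: no GridFTP tuning, no certificate management, and no babysitting of the transfer, which the service monitors, retries, and reports on.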
Biography
Rachana Ananthakrishnan
is a
Senior Engagement Manager
and Solutions Architect
at the
Computation
Institute,
and has a Joint Staff Appointment at
Argonne
National Laboratory.
Rachana is a member of the
Globus
Online
User Services team,
where she works with
researchers from various domains,
designing solutions for
secure research data management.
She has worked on
security and data management solutions
on various projects including
Earth
System Grid,
Biomedical
Informatics Research Network
(BIRN)
and
XSEDE.
Prior to that she worked on the
Globus
Toolkit
engineering team,
leading the efforts in core web
services and security technologies.
Rachana received her MS in
Computer
Science
at
Indiana
University,
Bloomington.
Associate Professor
Department of
Computing & Information Sciences
Kansas State
University
Director
Institute for Computational Research
Topic:
"Championing Users --
A Guide to Enabling Campus Researchers"
(with
Jeff Pummill)
Slides:
PDF
Abstract
As the need for
computational resources in scientific research
continues its explosive growth on
academic campuses,
the question becomes:
how do we best enable users
to take advantage of
local campus and
national infrastructures,
thus truly enabling their research?
The purpose of this session is
to explore both issues and opportunities
as a staff cyberinfrastructure enabler.
Examples of questions may include:
-
How important is it to have
resources locally?
-
What resources are needed locally?
-
What resources are available nationally?
-
What is an
XSEDE
Campus
Champion,
and how do you become one?
The talk will include a basic overview of
XSEDE,
as well as information on
the allocation process,
resource selection,
and
usage models.
In addition,
there are opportunities for
researchers,
educators,
and students
to engage and benefit.
This session will provide
an opportunity to get together with
other researchers and HPC center staff
to discuss success stories
and areas needing improvement,
or simply to ask questions about best practices
with a
group of peers.
Bring your comments,
critiques and questions,
and expect a lively discussion.
Biography
Daniel
Andresen, Ph.D.
is an associate professor of
Computing
& Information Sciences
at
Kansas
State University
and Director of the
Institute for Computational Research.
His research includes
embedded and distributed computing,
biomedical systems,
and high performance scientific computing.
Dr. Andresen coordinates the activities of
the K-State research computing cluster,
Beocat,
and advises the
local
chapter
of the
Association
for Computing Machinery
(ACM).
He is a
National
Science Foundation
CAREER
award winner,
and has been granted research funding from
the NSF,
the
Defense
Advanced Research Projects Agency
(DARPA),
and industry.
He is a member of
the
Association
for Computing Machinery,
the
IEEE
Computer Society,
the
Electronic
Frontier Foundation,
and
the
American
Society for Engineering Education.
Postdoctoral Research Associate
Department
of Chemistry & Biochemistry
University
of Oklahoma
Topic:
"In Silico Cross Seeding of Aβ and Amylin Fibril-like Oligomers"
Slides:
available after the Symposium
Talk Abstract
Recent epidemiological data have shown
that patients suffering from
Type 2 Diabetes Mellitus
have an increased risk
of developing Alzheimer's disease and vice versa.
A possible explanation is
the cross-sequence interaction between
Aβ and amylin.
Because the resulting amyloid oligomers
are difficult to probe in experiments,
we investigate
stability and conformational changes of
Aβ-amylin heteroassemblies
through molecular dynamics simulations.
We find that Aβ is a good template for
the growth of amylin and vice versa.
We see water molecules permeate
the β-strand-turn-β-strand motif pore
of the oligomers,
supporting a commonly accepted mechanism for
toxicity of β-rich amyloid oligomers.
Aiming for a better understanding of
the physical mechanisms of
cross-seeding and cell toxicity of
amylin and Aβ aggregates,
our simulations also allow us
to identify targets for
the rational design of inhibitors against
toxic fibril-like oligomers of
Aβ and amylin.
Biography
Workalemahu M. Berhanu
graduated
in 2011
from the
University
of Central Florida
with a PhD
in Chemistry.
Since January 2012,
he has been working in
Prof.
Ulrich Hansmann's
group in the
Department
of Chemistry & Biochemistry
at the
University
of Oklahoma
as a postdoctoral research associate.
His research interests are
biomolecular simulation and
computer-aided drug design,
focused on the
interaction of drug molecules
with their receptors
and on the modeling of
protein aggregation and its inhibition.
HPC Systems Administrator
Center
for Pediatric Genomic Medicine
Children's
Mercy Hospital
Topic:
"HPC and Genomics at Children's Mercy"
Slides:
available after the Symposium
Abstract
HPC has been critical for
The
Center
for Pediatric Genomic Medicine
(CPGM)
at
Children's
Mercy Hospital
to make great progress in
the search for rare childhood diseases.
With advanced next-gen sequencing technologies,
a Linux compute cluster,
and an
Isilon
storage cluster,
the center has been able
to make incredible discoveries
and help drive genomic testing
in the pediatric arena.
By releasing their clinical test,
STAT-Seq,
and utilizing homegrown analysis tools,
CPGM hopes to change
the way medicine is practiced.
Biography
Shane Corder
is the HPC Systems Administrator for the
Center for Pediatric Genomic Medicine
(CPGM)
at
Children's Mercy Hospital
in Kansas City, Missouri.
He has been in his current position
for 2 years.
He is responsible for
the administration,
support,
and planning of
the center's compute infrastructure.
Previously,
Shane was the Linux Cluster Engineer at
Advanced
Clustering Inc.
in Kansas City, KS for nearly 7 years.
In addition to his core responsibility of
supporting
The Center for Pediatric Genomic Medicine's
clinical research goals,
Shane has also been involved with
computational support and administration for
other departmental research programs at
the hospital.
His interests include
HPC,
Genomics,
Meteorology,
system performance tuning,
and system automation.
Solutions Architect
Tesla Sales
NVIDIA
Topic:
"GPU Computing Trends"
Slides:
available after the Symposium
Abstract
Graphics Processing Unit (GPU)
computing is maturing into
a mainstream application
acceleration technology.
We will survey
the current trends and
state of the art in
the GPU computing space
as it relates to HPC,
including
key application areas,
methodologies,
and
current hardware offerings.
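As a flavor of the programming model behind these trends, here is a minimal SAXPY sketch (ours, not NVIDIA's) in Python using Numba's CUDA support; it assumes a CUDA-capable GPU and the numba package.

    # SAXPY (out = a*x + y) offloaded to an NVIDIA GPU via Numba.
    import numpy as np
    from numba import cuda

    @cuda.jit
    def saxpy(a, x, y, out):
        i = cuda.grid(1)              # global thread index
        if i < out.size:              # guard: grid may exceed array length
            out[i] = a * x[i] + y[i]

    n = 1 << 20
    x = np.random.rand(n).astype(np.float32)
    y = np.random.rand(n).astype(np.float32)
    out = np.zeros_like(x)

    threads = 256
    blocks = (n + threads - 1) // threads
    saxpy[blocks, threads](np.float32(2.0), x, y, out)  # arrays copied to/from device

Each of the n elements is handled by its own GPU thread; this data-parallel style is what makes GPUs attractive for the application areas the talk surveys.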
Biography
Bob Crovella leads a technical team at
NVIDIA
that is responsible for
supporting the sales of NVIDIA's
GPU
Computing products
through
Original Equipment Manufacturer (OEM)
partners and systems.
Bob joined NVIDIA in 1998.
Previous to his current role at NVIDIA,
he led a technical team
that was responsible for
the design-in support of NVIDIA's GPU products
into OEM systems,
working directly with
the OEM engineering and technical staffs
responsible for their respective products.
Prior to joining NVIDIA,
Bob held various engineering positions at
Chromatic Research,
Honeywell,
Cincinnati
Milacron,
and
Eastman
Kodak.
Bob holds degrees from
Rensselaer
Polytechnic Institute
(M. Eng.,
Communications and Signal Processing)
and
The
State University of NY at Buffalo
(BSEE).
He resides with his family
in the Dallas, TX area.
Senior Product Marketing Manager
Storage and Data Management
Cray, Inc.
Topic:
"Tiered Storage for Big Data"
(with
Darren King)
Slides:
available after the Symposium
Talk Abstract
New data management strategies and solutions
are needed to deal with
the onslaught of data.
Cray
and
Spectra
Logic
will provide a glimpse into these challenges
and preview an up-and-coming solution
for Tiered Storage
as related to Big Data and HPC.
Cray and Spectra have partnered to deliver
a complete,
adaptive,
familiar,
and trusted solution for Tiered Storage.
The goal of the breakout session is
to drive discussion
around the topic
and hear from customers and users
about their challenges and goals.
Biography
Jason Goodman
works for
Cray's
storage and data management division.
He has over 15 years' experience in high-tech,
with expertise in data storage software,
data management,
networking,
and virtualization.
He's owned his own business and
worked for companies such as
Microsoft,
PolyServe
(acquired by
Hewlett-Packard),
Isilon
Systems
(acquired by
EMC),
Aspera,
and
GlassHouse
Technologies,
among others.
At Isilon,
Jason led
storage software product
management and commercialization strategies
for scale-out
Network Attached Storage
(NAS).
He helped move the company into position to
compete with EMC and
Network
Appliance
in the enterprise,
through capabilities such as archiving,
identity management,
iSCSI,
Storage
Resource Management
(SRM),
virtualization,
and
Wide
Area File Services
(WAFS),
among others.
On a personal note, Jason enjoys training dogs,
riding dirt bikes,
and playing lacrosse,
travelling to tournaments when possible.
Associate Dean, Knowledge Services &
Chief Technology Officer
University
of Oklahoma Libraries
University
of Oklahoma
Topic:
"Learning to SHARE"
Slides:
available after the Symposium
Abstract
During the past six months,
the
University
of Oklahoma Libraries
have made major strides forward
by putting into place
the stepping stones to
a dynamic digital future.
A new shared
(with
Oklahoma
State University)
digital institutional repository
(SHAREOK.ORG)
was installed,
running on OU's
Shared
Services
infrastructure.
OU and OSU are also sharing
a joint installation of the
Open
Journal System
software,
in order to produce
Open Access publications.
Ultimately,
these Open Access publications (and others)
will reside in the new shared repository.
A brand new digitization lab
was also created at OU Libraries,
which will digitize
images and other materials from
the special collections of the Libraries,
as well as other library resources.
The repository will also be
a point for the deposit of
publicly funded research data,
metadata and datasets
developed by researchers
both from within OU/OSU and
from other institutions and organizations.
This new digital institutional repository
is planned to scale
so that other
higher education and non-profit institutions
from across the state
that wish to join
can do so using the shared repository model.
In the context of the
OneOklahoma Research Data Stewardship
Initiative
and as part of
the promotion of and education about
this new digital resource and
new cooperative model in the state,
OU has been running symposia on
Research Data and Open Access
and has plans to hold more in the next year.
Finally,
on August 29, 2013,
the
Association
of Research Libraries
(ARL),
the
Association
of American Universities
(AAU),
and the
Association
of Public and Land-grant Universities
(APLU)
announced the formation of
a joint steering group
to advance a proposed network of
digital repositories at
universities,
libraries,
and
other research institutions
across the US
that will provide
long-term public access to
federally funded research articles and data.
This repository network,
the
SHared
Access Research Ecosystem
(SHARE),
is being developed
as one response to
a
White
House directive
instructing federal funding agencies
to make the results of research they fund
available to the public.
The
SHARE
Steering Group
will be chaired by
Rick
Luce,
Associate Vice President for Research
and
Dean of University Libraries
at OU,
and
Tyler
Walters,
Dean of
University
Libraries
at
Virginia Tech.
Biography
Carl Grant
is the
Chief Technology Officer
and
Associate University Librarian
for Knowledge Services
at the
University
of Oklahoma Libraries.
Prior to that,
he was the
Chief Librarian
and
President
of
Ex
Libris North America,
a leading
academic library automation company.
Mr. Grant has also held
senior executive positions
in a number of other
library-automation companies.
His commitment to libraries,
librarianship,
and information industry standards
is well known
via his participation in the
American Library Association (ALA),
the Association of College & Research Libraries,
and the Library and Information Technology Association,
and for his work on the board of the
National Information Standards Organization (NISO),
where he has held offices as board member,
treasurer,
and
chair.
In recognition of his contribution to
the library industry,
Library
Journal
has named Mr. Grant an industry notable.
Mr. Grant holds
a master's degree in
Library
Science
from the
University
of Missouri at Columbia.
Enterprise Sales Representative
Spectra
Logic
Topic:
"Tiered Storage for Big Data:
The Tape Component"
(with
Jason Goodman)
Slides:
available after the Symposium
Talk Abstract:
People have spoken of tape as
a dying market for years,
but if you look at
the market for archiving data
—
specifically research data
—
tape is projected to grow at
a 45% Compound Annual Growth Rate (CAGR)
through the year 2015.
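As quick arithmetic on that claim (ours, not Spectra Logic's): growth at a 45% CAGR multiplies the market by 1.45^n after n years, so three years of it gives

$$1.45^3 \approx 3.05,$$

i.e., roughly a tripling of the archive-tape market.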
Darren will discuss
tape's increasing role in the HPC vertical,
addressing customer challenges
and the advantages of
utilizing new tape technology
as part of
a tiered storage and archive strategy.
Biography
Darren King
manages the Central Region for
Spectra
Logic Corporation,
a data storage and backup company
based in Boulder, CO.
He joined Spectra Logic in 2004
as a Business Developer
and has spent the last 9 years
helping customers across the country
solve their problems related to
data storage,
backup
and
archiving.
Recently,
Darren has focused on
projects with HPC customers
such as the
Minnesota
Supercomputing Institute,
at the
University
of Minnesota,
the
Center
for Remote Sensing of Ice Sheets
(CReSIS)
at the
University
of Kansas,
and with the
Center
for Pediatric Genomic Medicine
at
Children's
Mercy Hospital.
Computer Scientist & Director
Tandy
Supercomputing Center
Oklahoma
Innovation Institute
Topic:
"The Tandy Supercomputing Center"
Slides:
PDF
Talk Abstract
The
Tandy
Supercomputing Center
is an initiative of the
Oklahoma
Innovation Institute
(OII),
a 501(c)(3) not-for-profit corporation
based in Tulsa, OK.
TSC's mission is
to provide cyberinfrastructure (CI) resources,
including high performance computing,
at low cost to Tulsa area researchers;
provide CI support to
local emerging growth companies;
and to expand
the technical capacity of the area
by facilitating education at all levels,
from vocational
to university
to continuing education.
This presentation will describe
the mission and structure of OII
and the Tandy Supercomputing
Center,
and give an overview of TSC's new system:
from design
to site renovations
to deployment and current operations.
Biography
George Louthan
serves as the Director of the
Tandy
Supercomputing Center,
an initiative of the
Oklahoma
Innovation Institute
in Tulsa, OK.
He joined the institute
as a volunteer computer scientist
helping to develop the supercomputing center
and procure its systems in 2011,
before becoming OII's
first full time employee in late 2012.
He holds an MS in Computer Science and
undergraduate degrees in
Computer Science and Mathematics from the
University
of Tulsa.
Before moving to high performance computing,
his background included work in
information security,
research software development
and
informatics.
Director for Research &
Cyberinfrastructure Initiatives
Great
Plains Network
Topic:
"About the Great Plains Network"
Slides:
PDF
Abstract
The
Great
Plains Network
was founded by researchers and for researchers
to advance regional capabilities
with respect to
advanced networking and
access to national cyberinfrastructure.
With over 20 leading universities
in seven states
as founding members,
the
Great Plains Network Consortium
continues to lead in support of
research collaboration,
education
and
advanced networking
for member
institutions.
Members trust and rely on
the expertise,
support,
and
collaboration of one another.
GPN staff actively seek out and help members
to pool their skills and knowledge
across universities and across disciplines.
In a host of technical and research areas,
GPN participants
are recognized leaders in their fields.
By partnering with one another,
their mutual efforts have attained
national and international recognition.
Biography
Dr.
Greg Monaco
has held several positions with the
Great
Plains Network
since joining in August 2000.
He began as Research Collaboration Coordinator,
and then was promoted to
Director for Research and Education,
followed by Executive Director
for several years.
He is currently the
Director for Research and
Cyberinfrastructure Initiatives.
Technical Manager
Enterprise Solutions
CommScope
Topic:
"Scalable Physical Layer
Infrastructure Solutions:
Meeting the Demands of Big Data
and
High Performance Computing
Environments"
Slides:
available after the Symposium
Talk Abstract:
Technology continues to evolve,
providing broad access to
greater computational power
and vastly larger quantities of data.
In the past,
multiple processing appliances
could be networked together
to provide the computational capacity
to tackle difficult problems.
More recently,
multi-core processors have been
a key driver in expanding
the processing
power of single appliances.
However,
high performance computing can be gated by
inadequate access to
the data required for use,
or by the inability
to store the output.
As such,
storage devices have also migrated
from older technologies,
such as tape storage,
through various disk drive types,
and
into solid state memory,
improving the speed of access to
the data contained therein.
However,
there is another potential roadblock
in the path to realizing more capable
high performance computing networks.
The connectivity of these appliances
can become a limiting factor
if it cannot support
the high speed,
high volume traffic
required between
processing appliances and storage devices.
The purpose of this session
is to examine the state of the art of
the physical layer infrastructure,
and how we must plan to meet
the needs of tomorrow
in modern Data Centers and HPC environments.
Biography
Kevin Paschal is responsible for providing
technical direction,
training,
and
support to
the Enterprise Solutions sales teams,
Business Partners,
and
end users for the
SYSTIMAX SCS
and
Uniprise
families of copper and fiber cabling systems.
Paschal received a Bachelor of Science degree
in
Mechanical
Engineering
from
North
Carolina State University.
He has more than 20 years of
experience with
data and telecommunications solutions,
including
proficiencies in
product management,
research & development,
engineering,
and
manufacturing.
Paschal holds 5 patents
relating to fiber optic cable
design.
Manager for Cyberinfrastructure Enablement
Arkansas High
Performance Computing Center
University
of Arkansas
Talk Topic:
"Championing Users --
A Guide to Enabling Campus Researchers"
(with
Dan Andresen)
Talk Slides:
available after the Symposium
Abstract
As the need for
computational resources in scientific research
continues its explosive growth on
academic campuses,
the question becomes:
how do we best enable users
to take advantage of
local campus and
national infrastructures,
thus truly enabling their research?
The purpose of this session is
to explore both issues and opportunities
as a staff cyberinfrastructure enabler.
Examples of questions may include:
-
How important is it to have
resources locally?
-
What resources are needed locally?
-
What resources are available nationally?
-
What is an
XSEDE
Campus
Champion,
and how do you become one?
The talk will include a basic overview of
XSEDE,
as well as information on
the allocation process,
resource selection,
and
usage models.
In addition,
there are opportunities for
researchers,
educators,
and students
to engage and benefit.
This session will provide
an opportunity to get together with
other researchers and HPC center staff
to discuss success stories
and areas needing improvement,
or simply to ask questions about best practices
with a
group of peers.
Bring your comments,
critiques and questions,
and expect a lively discussion.
Biography
Jeff
Pummill
is the
Manager for Cyberinfrastructure Enablement
at the
University
of Arkansas.
He has supported
the high performance computing activities at
the University of Arkansas
since 2005,
serving first as
Senior Linux Cluster Administrator
before his
current role,
and has more than a decade of experience in
managing
high performance computing resources.
Jeff is also the
XSEDE
Campus
Champion
for the
University of Arkansas,
and is a very active
contributor at the national level on the
Campus Champion Leadership Team.
Prof. Dr. (HU)
Department
of Physics Engineering
Hacettepe
University
Department
of Chemistry & Biochemistry
University
of Oklahoma
Topic:
"Self-assembly of the Tetrameric Miniprotein"
Slides:
available after the Symposium
Talk Abstract
We have systematically studied
the heterotetrameric miniprotein BBAThet1,
which consists of 84 residues in total
and is derived by computer-aided design
based on the ββα motif family.
Modeling of this system
can provide important insight
into folding mechanisms,
protein-protein interactions,
and association.
For this purpose,
we have performed
multiplexed replica exchange
molecular dynamics simulations
with the coarse-grained UNRES force field.
All simulations were done on
the Boomer cluster at the
OU
Supercomputing Center for
Education & Research
(OSCER).
Our observations show that
the four individual chains
associate into a discrete tetramer.
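For readers unfamiliar with the method: in replica exchange MD, copies of the system run in parallel at different temperatures, and neighboring replicas i and j (inverse temperatures β_i, β_j; instantaneous energies E_i, E_j) periodically attempt to swap configurations with the standard Metropolis acceptance probability

$$P_{\mathrm{acc}} = \min\!\left(1,\; e^{(\beta_i-\beta_j)(E_i-E_j)}\right),$$

which lets trajectories escape local minima at high temperature while preserving each replica's Boltzmann distribution.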
Biography
Fatih Yasar
completed his PhD,
on phase transitions in complex systems,
in 2000.
Since 2008,
he has been a full professor in the
Department
of Physics Engineering
at
Hacettepe
University
in Ankara, Turkey.
His research interests lie in the area of
computational physics,
from neural networks to protein systems.
Currently,
he works as
a long term visitor in
Ulrich
Hansmann's group
in the
Department
of Chemistry & Biochemistry
at the
University
of Oklahoma.
OTHER BREAKOUT SPEAKERS TO BE ANNOUNCED