J. Taylor Childers
  • Home
  • Resume
  • Publications

J. Taylor Childers

Argonne National Laboratory
Leadership Computing Facility
taylor@jtchilders.com
Building 240, 4142

Argonne National Laboratory (2018 - present)

Computer Scientist

  • Working with researchers from science domains to scale simulation codes using MPI on ALCF resources.
  • Applying scalable machine learning techniques on supercomputers.
  • Developing 3D ‘images’ of the O(100) million pixels of the ATLAS detector with ATLAS colleagues at Argonne, and designing custom YOLO (You Only Look Once) models to be trained in a distributed way across many compute nodes of the Theta supercomputer.
  • Scaling ML training in Tensorflow with data parallel methods using MPI on the Theta supercomputer.
  • Implemented a machine learning (ML) model in Keras composed of convolutional neural networks (CNNs), trained using Tensorflow, to classify energy deposits in the ATLAS detector as one of four particle types. The images used are 2D with 2 channels encoding radial distance from the center of the detector. The model is based on the CIFAR-10 model. Also studying the use of auto-encoding neural networks (NNs) to produce a reduced data representation for the ATLAS detector. I supervised a summer student on this project, who presented these studies in meetings of LHC researchers.
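The data-parallel training approach above can be sketched without any ML framework: each rank computes the gradient on its own shard of the data, the gradients are averaged (the role an MPI allreduce plays in the real training), and the identical averaged update is applied everywhere. The least-squares loss, random data, and four simulated ranks below are illustrative stand-ins, not the actual ATLAS training code.

```python
import numpy as np

def allreduce_average(worker_grads):
    """Average each gradient across workers, as an MPI allreduce would."""
    return [np.mean(np.stack(g), axis=0) for g in zip(*worker_grads)]

def data_parallel_step(w, shards, lr=0.1):
    """One synchronous data-parallel SGD step on a least-squares loss.

    Each 'rank' computes the gradient of (1/n)||X w - y||^2 on its own
    shard; the averaged gradient is applied identically on every rank.
    """
    grads = []
    for X, y in shards:                          # one iteration per simulated rank
        g = 2.0 * X.T @ (X @ w - y) / len(y)
        grads.append([g])
    (g_avg,) = allreduce_average(grads)
    return w - lr * g_avg

# Toy problem: recover w_true from noiseless linear data split over 4 ranks.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
shards = [(X[i::4], y[i::4]) for i in range(4)]  # equal-size shards, one per rank

w = np.zeros(3)
for _ in range(200):
    w = data_parallel_step(w, shards)
```

Because the shards are equal-sized, the averaged shard gradient equals the full-data gradient, so every rank stays bit-identical step to step — the same property synchronous MPI-based training relies on.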

Argonne National Laboratory (2013 - 2017)

Assistant Physicist

Picture
  • Developing a Python-based Django application to manage job workflows on HPC systems, including the flow of TBs of job data, web monitoring, and a management interface [1,4,7].
  • Developing parallel MPI-enabled workflows and generally refactoring HEP applications, originally designed to run serially, to be more efficient on supercomputers. For example, improved the event production per unit wall-time of the Alpgen event generator on Mira by more than 10x and scaled this serial code to all 786K cores, with two processes per core, using MPI and RAM disks [5].
  • Delivering over 250M core-hours (>250B simulated proton collisions) to ATLAS using Mira, Theta, Edison, and Cori as the PI of an ALCC award over the past three years.
  • Profiling simulation codes using tools, ranging from GDB to TAU, to identify performance and I/O bottlenecks. As an example, used TAU to identify inefficiencies in the Sherpa physics event generator reducing the initialization from an hour to minutes on Mira [2].
  • Analyzing physics data, scraping log files, and transforming data formats to validate simulation accuracy, consistency, and performance.
  • Demonstrating communication skills: regularly worked with teams of three to ten people from multiple institutes on the above projects, presented progress and plans to international collaborators in meetings of 15-50 people, and presented research contributions to leading conferences with audiences of 50-200 people.
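The heart of scaling a serial event generator with MPI is the bookkeeping that decides which events each rank produces. A minimal sketch of that partitioning, with rank and size passed as plain arguments (in a real workflow they would come from MPI_Comm_rank/MPI_Comm_size, e.g. via mpi4py's COMM_WORLD); the function itself is illustrative, not code from the cited work:

```python
def events_for_rank(n_events, rank, size):
    """Return the contiguous [start, stop) slice of event indices that a
    given MPI rank should generate.

    Any remainder of n_events % size is spread over the first 'extra'
    ranks, so per-rank workloads differ by at most one event.
    """
    base, extra = divmod(n_events, size)
    start = rank * base + min(rank, extra)
    stop = start + base + (1 if rank < extra else 0)
    return start, stop

# Example: 10 events over 4 ranks -> (0,3), (3,6), (6,8), (8,10)
spans = [events_for_rank(10, r, 4) for r in range(4)]
```

Each rank then runs the (unmodified) serial generator over only its own slice, which is what lets a serial code occupy every core of the machine at once.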

CERN (2011-2013)

Fellow

  • Measuring the differential top-pair production cross-section, a fundamental measurement of Standard Model physics, using TBs of data with a team of ten ATLAS collaborators. This analysis used Singular Value Decomposition to ‘unfold’ detector effects from data distributions in order to compare results with theoretical predictions. The Best Linear Unbiased Estimator (BLUE, per the Gauss–Markov theorem) method was used to combine distributions with correlated systematic uncertainties [8].
  • Developing parallel code in Python using the multiprocessing module to scale analysis on single multi-core nodes (up to 16 cores).
  • Leading the multinational detector operations team (~100 experts) as the experiment's Run Manager, mitigating problems that arise at all hours to keep data-taking efficiency high.
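Single-node scaling with the multiprocessing module follows a simple pattern: a pure per-event function mapped over a Pool of worker processes. The per-event kinematics function below is a toy stand-in (assumed names, not the actual analysis code); only the Pool pattern is the point.

```python
from multiprocessing import Pool

def invariant_mass_sq(event):
    """Toy per-event computation: (sum E)^2 - (sum p)^2 for particles
    with momenta along a single axis (illustrative stand-in only)."""
    energies, momenta = event
    return sum(energies) ** 2 - sum(momenta) ** 2

def analyze(events, n_workers=4):
    """Fan events out over a pool of worker processes, one per core.

    chunksize batches events per task to amortize inter-process
    communication overhead.
    """
    with Pool(processes=n_workers) as pool:
        return pool.map(invariant_mass_sq, events, chunksize=64)
```

Because each event is processed independently, this parallelizes embarrassingly up to the node's core count; the worker function must stay picklable (defined at module top level) for the Pool to ship it to child processes.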

Universität Heidelberg (2007-2011)

Post-doctoral Researcher

  • Leading the monitoring software team for a detector subsystem, responsible for analyzing data and organizing the plots used during data taking to detect problems and assess their source. Contributing analysis and calibration algorithms in Python and C++ to the experiment software framework, which is continuously developed by a large fraction of the three-thousand-person collaboration.
  • Developing in Verilog for FPGAs on the trigger system.
  • Analyzing nanosecond-level timing to extract calibration settings [9,11,12].
  • Primary editor for the electron/photon trigger performance paper [10].
  • Selecting conference attendees and reviewing submitted conference proceedings as an elected member of the Speakers Committee.
  • Mentoring three graduate students through experiment related projects.
  • Training collaborators in classes of 20-30 people on detector operations and monitoring. 

University of Minnesota (2002-2007)

Graduate Research Assistant

  • Graduated with a PhD in Physics
  • Studying the time-dependent cosmic ray propagation model by measuring the boron-to-carbon ratio using data from the CREAM balloon experiment [13].
  • Designing, constructing, and developing software (in C++) for the on-board data acquisition and command interface computer.
  • Performing calibration analysis for the detector and launch preparations in Antarctica [14].
  • Designing and implementing the data acquisition software for the CREST balloon experiment using pthreads, TCP/IP communication to the NASA command PC, and digital communication via the serial port for detector control.

University of Kentucky (1998-2002)

Undergraduate Research Assistant

  • Graduated with a Bachelor of Science in Physics, minor in Mathematics
  • Helped cable the Pb-W calorimeter for the Primex experiment at Jefferson Lab in Virginia
  • Designed and built a cylindrical plastic scintillator array for observing cosmic ray angular distributions and other characteristics. Analysis performed in C.
  • President of the local chapter of the Society of Physics Students 