I am an associate professor in the Department of Computer Science at the University of New Mexico and director of the Data Science Laboratory. My research interests span the intersection of Machine Learning, High Performance Computing, Big Data, and their applications to interdisciplinary problems.
The goal of my research program is to solve computationally intensive and data-intensive problems in science, health, and education, especially in scenarios where resources and trained professionals are scarce. I believe that a computer is only as good as the difference it can make in the world, and I strive to achieve this level of impact with my work.
I obtained a PhD in computer science from the University of Delaware, where I worked with Dr. Michela Taufer on integrating application-aware self-management into global distributed computing environments, with the final goal of making them accessible to a wider scientific community. I also earned an M.S. in computer science from INAOE, Puebla, with Dr. Olac Fuentes, and a B.S. in informatics from the Universidad de Guadalajara, Jalisco.
I'm originally from Guadalajara, a beautiful city on the Pacific side of western Mexico. My name, Trilce (pronounced in English like tree-il-se), is a neologism invented by Peruvian poet César Vallejo.
Introduction to Computer Programming is a gentle and fun introduction to the art of programming. Students will learn the foundations of how to think like a computer scientist, design programs, and implement solutions in a high-level programming language. Projects will combine computing, art, and imagination to engage students with core principles of programming.
As intelligent systems become pervasive and data production grows at a rate never seen before, the ability to understand, analyze, and automatically learn from data is becoming a crucial technical skill. In this course we cover the principles and practice of Machine Learning, that is, systems that improve their performance on specific tasks through experience. The course balances theory and practice to provide students with the fundamentals of statistical learning as well as with hands-on experience building predictive systems.
In this course we study key data analysis and management techniques which, applied to massive datasets, are the cornerstone that enables real-time decision making in distributed environments, business intelligence on the Web, and scientific discovery at large scale. In particular, we examine the MapReduce parallel computing paradigm and associated technologies such as distributed file systems, NoSQL databases, and stream computing engines. Additionally, we review machine learning methods that make possible the efficient analysis of large volumes of data in near real time.
This course explores the design, experimentation, testing, and pitfalls of empirical research in Computer Science. In particular, students will learn how to use a data-driven approach to understand computing phenomena, formulate hypotheses, design computing experiments to test and validate or refute those hypotheses, and evaluate and interpret empirical results. Overall, the goal of this course is to provide students with the foundations of rigorous empirical research.
This project seeks to enable accurate and general distributed learning in domains where data collection is spread across multiple physical locations. Of particular relevance are scenarios where raw data communication is infeasible because of its volume or because of privacy and security concerns. This project presents a comprehensive approach to data-to-knowledge extraction, representation, and learning at scale.
This project tackles the challenge of analyzing data from molecular dynamics simulations on next-generation supercomputers by: (1) creating new in situ methods to trace molecular events such as conformational changes, phase transitions, or binding events in molecular dynamics simulations; and (2) designing new data representations to build an explicit global organization of structural and temporal molecular properties.
This project is researching a new approach to optimizing resource allocation in cyber-infrastructure systems based on computational modeling and optimization of system performance. The models, analyses, and optimization techniques researched in this work will enable software cyber-infrastructure, applications, and system architects to make effective end-to-end performance tradeoffs, increasing the efficiency of important strategic computing systems.
The New Mexico SMART Grid Center will develop research capacity and education programs to support a modern electric grid built on the principles of distribution feeder microgrids (DFMs), and empower a diverse, next-generation workforce through industry partnerships, education, and public outreach.
The RobustScience project gathers interdisciplinary communities to define a roadmap to robust science using high-throughput applications for scientific discovery. High-throughput applications combine multiple tasks into increasingly complex workflows on heterogeneous systems. Robust science should assure performance scalability in space and time; trust in technology, people, and infrastructures; and reproducible or confirmable research in high-throughput applications.