Projects
Train and Crowd Simulation
I am currently working on an agent-based model of Metro Manila's three rapid transit train systems (LRT-1, LRT-2, and MRT-3). The model simultaneously simulates, in a single integrated model, both train operations and passenger crowds within each train system. The goal of the model is to enable the observation and analysis of how the deployed trains and the crowds of passengers influence each other. The model was validated against empirical smart card trip data and video recordings of stations to ensure fidelity to the real-world train systems.
To support this tight integration between train and crowd dynamics, several techniques are used to optimize the model, such as parallelizing agent updates, precomputing paths and routes, and caching commonly computed results. The model is written in Java, with JavaFX as its graphical user interface (GUI) framework, and its parameters are stored in an SQLite relational database.
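As a rough illustration of two of these optimizations, here is a minimal Java sketch in which shortest paths between stations are memoized in a concurrent cache and independent agent updates run on a parallel stream; the class and method names are hypothetical and are not taken from the actual model.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class StationGraph {
    // Cache computed paths so repeated origin-destination queries are cheap.
    private final Map<String, List<String>> pathCache = new ConcurrentHashMap<>();

    List<String> shortestPath(String origin, String destination) {
        return pathCache.computeIfAbsent(origin + "->" + destination,
                key -> computeShortestPath(origin, destination));
    }

    private List<String> computeShortestPath(String origin, String destination) {
        // Placeholder for an actual search (e.g., Dijkstra) over the rail network.
        return List.of(origin, destination);
    }
}

class PassengerAgent {
    void update() {
        // Movement, queuing, boarding, and alighting logic would go here.
    }
}

class Simulation {
    // Each passenger's update is independent within a tick, so the agent
    // loop can be spread across all available cores.
    void step(List<PassengerAgent> passengers) {
        passengers.parallelStream().forEach(PassengerAgent::update);
    }
}
```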
I started developing this model in December 2019 for my Master's thesis, advised by Unisse Chua and Dr. Briane Samson at the Center for Complexity and Emerging Technologies (COMET) laboratory.
The research was awarded the gold medal for outstanding thesis by the College of Computer Studies (CCS) in September 2021. In December 2021, I presented this work at the 17th ERDT Conference, where it was awarded best paper among Information and Communications Technology (ICT) topics. The system has since been incorporated into the System for Optimized Routing for Transport (SORT) project of the Department of Science and Technology (DOST). As part of the SORT team, we plan to extend the model to even deeper levels of integration, such as simulating all three train systems at once, and to support more modes of transport.
GitHub: https://github.com/dlsucomet/trainsim
Technical Paper: Investigating the Interaction Between Crowd Dynamics and Train Operations Through Agent-Based Modeling
ViTune
I worked on a prototype music visualizer for the Deaf and Hard of Hearing (DHH) community. The system aimed to augment the musical experience of the DHH community through its visualizations, which were designed according to principles from related literature as well as direct consultations with the DHH community. The prototype was improved incrementally over three iterations.
The prototype was built in Python using the Django framework. Its visualizations were produced with audio processing techniques such as beat detection, fast Fourier transforms (FFT), and spectrogram analysis to detect the notes of .wav music files.
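The visualizer itself is written in Python, but purely as an illustration of the kind of spectral analysis involved, here is a minimal, self-contained Java sketch that reads one frame of 16-bit PCM samples from a .wav file and finds its dominant pitch with a naive discrete Fourier transform; the structure is hypothetical, and the real prototype relies on FFT-based processing instead.

```java
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import java.io.File;

public class DominantPitch {
    public static void main(String[] args) throws Exception {
        // Assumes a mono, 16-bit, little-endian PCM .wav file.
        AudioInputStream in = AudioSystem.getAudioInputStream(new File(args[0]));
        float sampleRate = in.getFormat().getSampleRate();
        byte[] bytes = in.readNBytes(4096); // one 2048-sample frame

        // Decode little-endian 16-bit PCM into doubles in [-1, 1].
        int n = bytes.length / 2;
        double[] frame = new double[n];
        for (int i = 0; i < n; i++) {
            frame[i] = ((short) ((bytes[2 * i + 1] << 8) | (bytes[2 * i] & 0xFF))) / 32768.0;
        }

        // Naive O(n^2) DFT: find the frequency bin with the largest magnitude.
        int peakBin = 1;
        double peakMag = 0;
        for (int k = 1; k < n / 2; k++) {
            double re = 0, im = 0;
            for (int t = 0; t < n; t++) {
                double angle = 2 * Math.PI * k * t / n;
                re += frame[t] * Math.cos(angle);
                im -= frame[t] * Math.sin(angle);
            }
            double mag = Math.hypot(re, im);
            if (mag > peakMag) { peakMag = mag; peakBin = k; }
        }

        // Convert the peak bin to a frequency, then to the nearest MIDI note.
        double peakFreq = peakBin * sampleRate / n;
        long midiNote = Math.round(69 + 12 * (Math.log(peakFreq / 440.0) / Math.log(2)));
        System.out.printf("Peak: %.1f Hz (MIDI note %d)%n", peakFreq, midiNote);
    }
}
```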
Together with Toei Ciriaco, Carlo Eroles, and Hans Lee, I developed the prototype over the course of a year, from August 2018 to August 2019, as part of our requirements for a Bachelor's degree in Computer Science. We were advised by Jordan Deja at the Center for Complexity and Emerging Technologies (COMET) laboratory, and we worked closely with interpreters and the DHH community through the School of Deaf Education and Applied Studies (SDEAS) of De La Salle-College of Saint Benilde. In April 2020, the tool was featured as a Late-Breaking Work at the ACM CHI Conference on Human Factors in Computing Systems.
GitHub: https://github.com/AlxDt/vitune
Publication: ViTune: A Visualizer Tool to Allow the Deaf and Hard of Hearing to See Music With Their Eyes