On Friday, March 8th, 2024, the TEIL Seminar Series featured guest speaker Mark V Albert, PhD, who led a discussion detailing his work using deep learning and artificial intelligence to enhance medical outcomes.
Dr. Albert received his PhD in computational biology from Cornell University. He currently serves as Associate Chair of Graduate Studies for the Computer Science and Engineering (CSE) program at the University of North Texas, with a dual appointment in the Departments of Biomedical Engineering and Computer Science and Engineering.
Dr. Albert is director of the Biomedical AI Lab at UNT and a co-coordinator of the university’s AI Summer Research Program. In 2022, he authored Bridging Artificial Intelligence and Human Intelligence, a research-based academic work that analyzes similarities between artificial intelligence models and human cognition to deepen understanding and foster innovative collaboration between researchers in the two fields. His talk delves into his extensive background in leveraging deep learning for biomedical applications, highlighting impactful current and past projects.
Dr. Albert begins his discussion with an overview of deep learning, a subset of machine learning built on multi-layered artificial neural networks. Both fall under the broader AI umbrella, but a key advantage of deep learning lies in its ability to learn useful features directly from raw input, provided the training dataset is large enough. In contrast, traditional machine learning requires human intervention to engineer and extract the most useful features from the data. For tasks like facial recognition and speech recognition, this difference is significant.

Dr. Albert asserts that popular deep-learning systems are extraordinarily powerful. GPT-4, for instance, performs many high-level tasks with impressive accuracy, and its capabilities seem nearly endless. On the other side of the coin, deep learning systems carry real disadvantages: they require far more data than traditional machine learning and offer a more limited degree of explainability.
According to Dr. Albert, the larger dataset is precisely what allows the system to determine, with greater accuracy, which pieces of data are most useful for feature classification. With the right data, deep learning systems can identify and analyze key features of complex problems more accurately than human-engineered approaches. In response, he believes research strategies in the field of AI are shifting to emphasize the development of algorithms that identify useful data, so that future projects can apply deep learning capabilities to solve related problems.
Dr. Albert has employed artificial intelligence in biomedical research with great success. In collaboration with the Shirley Ryan AbilityLab, a research-driven rehabilitation center based in Chicago, he performed a study evaluating the functionality of prosthetic legs with microprocessor-controlled knees compared to mechanical knees. Instead of analyzing individual performance across a variety of factors and comparing the scores, his team opted to use an algorithm to perform principal component analysis (PCA) and yield a single score.
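The idea of collapsing many outcome measures into one PCA score can be sketched in a few lines. The sketch below is purely illustrative: the patient data, the number of metrics, and the induced correlation are invented here, since the actual AbilityLab dataset is not described in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical outcome data: 50 patients x 8 performance metrics
# (standing in for whatever measures the real study collected).
X = rng.normal(size=(50, 8))
X[:, 1] = 0.9 * X[:, 0] + 0.1 * X[:, 1]  # make two metrics correlated

# Center the data, then project onto the first principal component.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[0]                  # one composite score per patient
explained = S[0] ** 2 / np.sum(S**2)  # fraction of variance captured
```

The `explained` ratio is what makes the linear method's ceiling visible: whatever variance the leading component misses is the opening for the nonlinear approach described next.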
PCA, as a classical machine learning technique, carries the limitations of a linear analysis. After succeeding in analyzing overall performance with principal component analysis, Dr. Albert developed a deep learning autoencoder. By accounting for nonlinear interactions, the jump from a machine learning system to a deep learning system captured an additional 14% of the statistical variance. Dr. Albert’s research demonstrated that patients improved significantly with the more expensive microprocessor knee, justifying the additional cost for insurance companies and potentially increasing coverage. The project hopes to yield better patient outcomes by increasing access to higher-quality medical devices for those without the financial means to pay for them out-of-pocket.
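The nonlinear step can be illustrated with a minimal autoencoder. This is a toy sketch under stated assumptions, not Dr. Albert's implementation: the patient metrics are invented with a deliberately nonlinear latent structure, and the one-unit tanh bottleneck trained by plain gradient descent is the simplest architecture that goes beyond what a linear PCA can capture.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical metrics for 100 patients, all driven nonlinearly by one
# latent factor t (names and data are illustrative only).
t = rng.uniform(-1, 1, size=(100, 1))
X = np.hstack([t, t**2, np.sin(3 * t), t**3, np.cos(2 * t), 0.5 * t])
X += 0.01 * rng.normal(size=X.shape)
X = (X - X.mean(0)) / X.std(0)       # standardize each metric

# Autoencoder: 6 metrics -> 1 tanh unit (the score) -> 6 reconstructed.
n, d, lr = X.shape[0], X.shape[1], 0.05
W1 = rng.normal(scale=0.1, size=(d, 1)); b1 = np.zeros(1)
W2 = rng.normal(scale=0.1, size=(1, d)); b2 = np.zeros(d)
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)          # bottleneck activation
    err = (h @ W2 + b2) - X           # reconstruction error
    gW2 = h.T @ err / n; gb2 = err.mean(0)
    gh = err @ W2.T * (1 - h**2)      # backprop through tanh
    gW1 = X.T @ gh / n; gb1 = gh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

h = np.tanh(X @ W1 + b1)
score = h.ravel()                     # one nonlinear score per patient
mse = float(np.mean(((h @ W2 + b2) - X) ** 2))
```

Because the metrics are standardized, reconstruction MSE below 1.0 means the single nonlinear score explains variance that predicting the mean cannot; comparing this against the PCA reconstruction error is exactly the kind of gap behind the "additional 14% of variance" result.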

Additionally, in collaboration with Shriners Hospitals for Children, Dr. Albert and his team utilized machine learning to analyze data collected from their motion analysis centers. For children with cerebral palsy, data were collected before and after surgical procedures to improve their gait. Dr. Albert’s team used machine learning to sift through years of data points spanning the hundreds of metrics tracked by the motion analysis centers. His efforts culminated in the development of the Shriners Data Index, a deep learning model that examines multiple metrics and assigns a single score, making it easier for clinicians to assess the broader picture of gait function. As the model gathers additional data from Shriners sites across the country, this holistic measure aims to improve outcomes for patients with gait differences.
Dr. Albert concluded his presentation by encouraging his audience to consider inventive methods of utilizing artificial intelligence and deep learning to increase efficiency in medical research. As deep-learning models continue to advance, questions remain about how best to harness the immense power of these systems to improve health outcomes. Before leaving, audience members from departments across the SMU campus broke into discussion groups to explore avenues for interdisciplinary collaboration.
If you are interested in collaborating with peers in technology-enhanced learning, immersive learning, and AI/machine learning spaces, join us for our next TEIL Seminar on Friday, April 19th, from 12 p.m. to 1:30 p.m. in Harold Clark Simmons Hall, Room 116, and on Zoom. The speaker will be Candace Walkington, PhD, from the SMU Department of Teaching and Learning.
For more information, visit our website at www.smu.edu/teil.