It's a skunkworks for Generative AI resources at the Corcoran School of Arts and Design!
We started with some low-hanging fruit in May of 2025:
ComfyUI became our point of departure. It is an open-source project that provides a visual user interface for building custom workflows around generative AI models. It may be of particular interest to creators in the CSAD community because it opens up many possibilities, including text-to-image, image-to-video, image-to-3D, and audio generation, all within a single visual framework. If you have used apps like Max/MSP, Pure Data, TouchDesigner, or Grasshopper, the node-based interface will be familiar.
There are a number of ways to run ComfyUI, both locally and over networks. Advanced users can install it via the terminal from ComfyUI's GitHub repository. For everyone else, there are standalone desktop versions for Mac and Windows. However, performance depends heavily on the user's hardware, and local installs will generally be slower than online services like RunComfy, which typically charge subscription fees.
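For the terminal route, the install typically looks like the following. This is a sketch assuming a working Python 3 and git; the exact dependency steps (especially the right PyTorch build for your GPU) vary by platform, so consult the ComfyUI README before running it:

```shell
# Clone the ComfyUI repository
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI

# (Recommended) create an isolated Python environment
python3 -m venv venv
source venv/bin/activate

# Install Python dependencies; if you have NVIDIA/AMD hardware,
# install a GPU-appropriate PyTorch build first (see the README)
pip install -r requirements.txt

# Launch the server, then open the printed local URL
# (by default http://127.0.0.1:8188) in a browser
python main.py
```

This is also roughly what an HPC deployment has to automate, which is why the virtual-machine approach described below is attractive for non-technical users.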
With that in mind, James Huckenpahler and Dhwanil Mori have been working with the Research Technology Services (RTS) team to determine whether it is feasible to run ComfyUI on High Performance Computing (HPC) resources. The aim is to provide ComfyUI on high-performing hardware as a virtual machine, so that students and faculty can conduct research into the applicability of generative AI in the arts. This repository captures our notes.
A demo video made for program heads (and anyone else who might be interested) can be viewed here.
More recently, we presented to GW Coders; a video recording can be found here.
Our summer work was made possible with support from RTS, and from Dr. Ryan Watkins through the Trustworthy AI in Law and Society (TRAILS) initiative, for research titled "Operationalizing Trustworthy AI: LLM Development in Academia."
