This was a trial session – Julian was experimenting with some AI image tools and invited participants to join in the fun.
We explored AI text-to-image generation using the open-source Stable Diffusion model.
Then we took some example footage to try an AI-powered green-screen tool: RVM (Robust Video Matting).
- worked quite well, using the colab notebook
- had to reduce uploaded video size, otherwise it just got stuck
- a little unreliable; perhaps colab GPU availability is low during regular daytime?
- would be a good tool to investigate further, perhaps by developing a script to make it much easier for students to generate green-screen mattes
- need to compare this to the 3D keyer in the Fusion tab of the latest free version of DaVinci Resolve
The green-screened content was then split into individual (numbered) frames using ffmpeg.
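The frame-splitting step can be sketched as below. This is a minimal sketch, not the exact command used in the session: the input filename, output directory, and `%04d` numbering pattern are assumptions, and it only builds the ffmpeg command so you can inspect it before running.

```python
import shlex

def frame_extract_cmd(video, out_pattern="frames/%04d.png", fps=None):
    """Build an ffmpeg command that splits a video into numbered frames.

    `video` and `out_pattern` are hypothetical example paths; `%04d`
    makes ffmpeg number the output frames 0001.png, 0002.png, ...
    """
    cmd = ["ffmpeg", "-i", video]
    if fps is not None:
        # optionally resample to a fixed frame rate first
        cmd += ["-vf", f"fps={fps}"]
    cmd.append(out_pattern)
    return cmd

cmd = frame_extract_cmd("keyed.mp4")
print(shlex.join(cmd))  # ffmpeg -i keyed.mp4 frames/%04d.png
# To actually run it (requires ffmpeg installed):
#   import subprocess; subprocess.run(cmd, check=True)
```

Building the argument list rather than a shell string avoids quoting problems with filenames, which matters once students start naming their own clips.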
Then participants modified the first frame (e.g. caricatured their own image) and ran it through EbSynth.
The results were impressive for the amount of time taken to create keyframes.
- look into a more friendly means of generating the individual frames; ffmpeg is wonderful, but not the easiest for beginners
- possibly DaVinci Resolve or VideoLAN (VLC)
- learn how to use a better drawing program, e.g. Krita
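Once EbSynth has stylised the frames, they need to be re-encoded into a video; that step wasn't scripted in the session, but a sketch of the inverse ffmpeg command might look like this. The paths, frame rate, and codec choices are assumptions, and again only the command is built, not executed.

```python
import shlex

def frames_to_video_cmd(in_pattern="stylized/%04d.png", out="out.mp4", fps=25):
    """Build an ffmpeg command that re-encodes numbered frames into a video.

    Paths and fps are hypothetical examples; match fps to the source footage.
    """
    return [
        "ffmpeg",
        "-framerate", str(fps),   # input frame rate of the image sequence
        "-i", in_pattern,         # numbered input frames
        "-c:v", "libx264",        # widely supported H.264 output
        "-pix_fmt", "yuv420p",    # pixel format most players accept
        out,
    ]

print(shlex.join(frames_to_video_cmd()))
```

If the source clip was extracted at a different frame rate, the same value should be passed here so the stylised video keeps the original timing.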