BLOG.LAPLANTE | A place for words | https://seanlaplante.com

Adding Undefeatable Protection to Any Process Using Virtual Machine Introspection
https://seanlaplante.com/2026/02/07/adding-undefeatable-protection-to-any-process-using-virtual-machine-introspection/
Sat, 07 Feb 2026 21:04:46 +0000

In my last post I talked about the current state of open source virtual machine introspection (VMI) and how I'd like to take a stab at bringing it back to life as best I can. In this post, we're going to go over an example of just how powerful VMI can be while work continues on cleaning up the IntroVirt repo and getting it into a better, more usable state.

Since my post last week, a good amount of progress has been made. The original creator of IntroVirt (Steve!) is back to help, quickly updating the KVM patch for a much newer Linux kernel (6.18) as well as getting started with testing on Windows 11 (which turns out to kind of work). We should see a push or PR for those changes shortly.

While he's been doing that, I've been working on some cleanup, testing, bug fixes, automation, and patch improvements. libmspdb has a new version with some cleanup and bug fixes, as well as automated builds, testing, and releases using GitHub Actions. kvm-introvirt has a bunch of fixes to the patch, some cleanup, automation in GitHub Actions, and support for two Proxmox (PVE) kernels. We also have a Discord now and a roadmap!! It's been a busy couple of weeks.

Finishing The VMCALL Example

As I was going through, cleaning things up, and planning out next steps, I noticed an examples directory in the IntroVirt repository with two examples: one practically blank, and the other 99% complete. So naturally, we delete the blank one as if it never existed and finish the other one: vmcall_interface.cc. I don't want to just finish it and move on though, I want to make it cooler.

What is the point of vmcall_interface.cc? It's probably best to understand it first. The vmcall_interface.cc example demonstrates how to create an IntroVirt tool that can receive commands from processes in the guest and perform actions on behalf of those processes that wouldn't normally be possible without VMI. For example, you could make a command to elevate an unprivileged guest process to admin while completely bypassing all security controls. Or you could give guest processes a way to request additional protection from the hypervisor: preventing termination or debugging. We're going to go with the latter for this example.

The fundamental thing at play here is the vmcall instruction. It's a regular old assembly instruction like any other, but when it runs, it triggers a VM exit and then the hypervisor decides what to do. Hyper-V has them, KVM has them, Xen has them too, they all have them. What makes this so powerful is that we can write a tool that adds custom handling for vmcall instructions executed at any privilege level, without recompiling or modifying the hypervisor (since the KVM patch for IntroVirt already exposes that functionality).

Let’s start with the in-guest component. We just need a simple user-mode application in C with an assembly stub to perform the vmcall instruction. Here’s a simple assembly stub that implements 2 capabilities (the one on GitHub has way more comments and features):

.code
; Reverse a NULL-terminated string
HypercallReverseCString PROC
    mov rax, 0FACEh
    mov rdx, rcx  
    mov rcx, 0F000h   
    vmcall           
    ret
HypercallReverseCString ENDP

; Protect a process from termination, debug, and modification
HypercallProtectProcess PROC
    mov rax, 0FACEh
    mov rcx, 0F002h
    vmcall
    ret
HypercallProtectProcess ENDP

Then we can use it with a little C program:

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <windows.h>

extern uint64_t HypercallReverseCString(char *c_str);
extern uint64_t HypercallProtectProcess();

int main(int argc, char** argv) {
    uint64_t status = 0;
    char test_str[] = "Hello, IntroVirt!";

    printf("Original string: %s\n", test_str);
    status = HypercallReverseCString(test_str);
    if (status == 0) {
        printf("Reversed string: %s\n", test_str);
    } else {
        printf("Failed. Status code: %llu\n", status);
    }

    status = HypercallProtectProcess();
    if (status == 0) {
        while (1) {
            printf("This process is protected\n");
            Sleep(2000);
        }
    } else {
        printf("Failed. Status code: %llu\n", status);
    }
    return 0;
}

So now let’s explain what’s happening here. In our vmcall_interface tool we expose “service codes”. These are the “things” we can do and they map directly to the two assembly functions from above. We can reverse a null-terminated C-string and protect the calling process from debugging, termination, or injection. The service codes are defined in vmcall_interface.cc like this:

enum IVServiceCode { CSTRING_REVERSE = 0xF000, PROTECT_PROCESS = 0xF002 };

We can make as many service codes as we want, and we can implement them however we want. For this simple example we just use an enum to define our codes as arbitrary values. It is the responsibility of the in-guest process to execute the vmcall instruction with the appropriate service codes.

The complete version of vmcall_interface will have more service codes and the in-guest components may be more complicated. These snippets simply illustrate the core of what’s going on. Any additional service codes or functionality is just more of the same with tweaks to perform different actions or add additional protections.

To make a vmcall we simply need to get the vmcall instruction to run. That's what our assembly stub is for. Then we need to pass in our service code and any other arguments in the appropriate CPU registers. So, if that's all, how does IntroVirt know about our vmcall? Well, there's one more piece. We need to set a register to a special constant value that lets IntroVirt know this vmcall is an event that should be sent to an IntroVirt tool instead of going through the default hypercall logic of the KVM hypervisor. We can see a simplified version of this logic in the kvm-introvirt KVM patch:

int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
{
    unsigned long nr, a0, a1, a2, a3, ret;
    int op_64_bit;
    nr = kvm_rax_read(vcpu);
    if(nr == 0xFACE) {
        const uint64_t original_rip = kvm_rip_read(vcpu);
        if (vcpu->vmcall_hook_enabled) {
            kvm_deliver_vmcall_event(vcpu);
        }
        ++vcpu->stat.hypercalls;
        if (original_rip == kvm_rip_read(vcpu)) {
            return kvm_skip_emulated_instruction(vcpu);
        }
        return 0;
    }
//...the rest of kvm_emulate_hypercall() is just the normal
// non-introvirt code for handling hypercalls in KVM.

So we can see here, if we ensure the RAX register is set to 0xFACE when we make our vmcall then the handling will divert away from stock KVM into IntroVirt. In addition to the 0xFACE value we also need to pass in the service code and any arguments for the service we’re implementing. To reverse a C-String, we just need the service code and the pointer to the string. For the process protection we just need the service code. If we look at vmcall_interface.cc we can see the service code is expected to be in the RCX register.

switch (regs.rcx()) {
case CSTRING_REVERSE:
    return_code = service_string_reverse(event);
    break;
case PROTECT_PROCESS:
    return_code = service_protect_process(event);
    break;
}

And for service_string_reverse, we expect the pointer to the string to be in the RDX register. With that, we now have enough information to actually understand the assembly stub from earlier:

.code
; Reverse a NULL-terminated string
HypercallReverseCString PROC
    mov rax, 0FACEh
    mov rdx, rcx  
    mov rcx, 0F000h   
    vmcall           
    ret
HypercallReverseCString ENDP

; Protect a process from termination, debug, and modification
HypercallProtectProcess PROC
    mov rax, 0FACEh
    mov rcx, 0F002h
    vmcall
    ret
HypercallProtectProcess ENDP

In both cases we put 0xFACE (0FACEh in this assembly syntax) in the RAX register. Then we put the service codes 0xF000 and 0xF002 in RCX. The only catch is in HypercallReverseCString, where we do mov rdx, rcx. This requires some understanding of calling conventions on Windows, but to skip the full explanation: when we call HypercallReverseCString(test_str), the address of test_str ends up in RCX. Since we actually need that pointer to be in RDX and the service code to be in RCX, we do a quick switch to re-arrange things so it's all prepped for the vmcall.

As we've seen up to this point, when the hypervisor sees these vmcall instructions with those values in those registers, an event will be sent to any running IntroVirt tools attached to that guest VM. In this case, it will be our vmcall_interface tool, which will parse the service code in the switch statement above and call one of service_string_reverse or service_protect_process, both of which are fairly straightforward:

int service_string_reverse(Event& event) {
    auto& vcpu = event.vcpu();
    auto& regs = vcpu.registers();
    try {
        // RDX has a pointer to the string to reverse
        guest_ptr<void> pStr(event.vcpu(), regs.rdx());

        // Map it and reverse it
        guest_ptr<char[]> str = map_guest_cstring(pStr);
        reverse(str.begin(), str.end());
    } catch (VirtualAddressNotPresentException& ex) {
        cout << ex;
        return -1;
    }
    cout << '\t' << "String reversed successfully\n";
    return 0;
}

int service_protect_process(Event& event) {
    auto& task = event.task();
    lock_guard lock(mtx_);
    protected_pids_.insert(task.pid());
    return 0;
}

The service_string_reverse function is the simplest example since it does everything it needs to right there: when it returns, the string has been reversed.

service_protect_process is harder to show in one snippet since it involves tracking the process and then monitoring system calls. In the snippet above we add the PID to our protected_pids_ variable. Then we need to handle the NtOpenProcess system call:

case SystemCallIndex::NtOpenProcess: {
    auto* handler = static_cast<nt::NtOpenProcess*>(wevent.syscall().handler());
    auto desired_access = handler->DesiredAccess();
    auto* client_id = handler->ClientId();
    const uint64_t target_pid = client_id->UniqueProcess();

    lock_guard lock(mtx_);
    if (protected_pids_.count(target_pid)) {
        if (desired_access.has(nt::PROCESS_TERMINATE) ||
            desired_access.has(nt::PROCESS_VM_WRITE) ||
            desired_access.has(nt::PROCESS_VM_OPERATION) ||
            desired_access.has(nt::PROCESS_CREATE_THREAD) ||
            desired_access.has(nt::PROCESS_CREATE_PROCESS) ||
            desired_access.has(nt::PROCESS_SET_INFORMATION))
        {    
            handler->ClientIdPtr(guest_ptr<void>());
        }
    }
    break;
}

NtOpenProcess is the precursor to basically anything that can be done to a process. It's not possible to debug a process, read its memory, write its memory, terminate it, inject into it, or anything else without first opening a handle to it. So this snippet shows that all we need to do is look for processes opening our protected process, check the access rights they are requesting, and, if we don't like it, simply change the ClientId parameter to a NULL pointer, which will result in an invalid parameter error at the kernel level and the call will fail. Once that's handled, we just have a snippet for catching terminate and we're basically done:

case SystemCallIndex::NtTerminateProcess: {
    auto* handler = static_cast<nt::NtTerminateProcess*>(wevent.syscall().handler());
    if (!handler->will_return() || handler->target_pid() == wevent.task().pid()) {
        lock_guard lock(mtx_);
        protected_pids_.erase(wevent.task().pid());
        break;
    } else {
        lock_guard lock(mtx_);
        if (protected_pids_.count(handler->target_pid()) != 0) {
            // This process is protected. Deny termination
            handler->ProcessHandle(0xFFFFFFFFFFFFFFFF);
            break;
        }
    }
}

This way the process can still exit on its own, but any other process trying to terminate it will fail. We achieve this by intercepting the NtTerminateProcess call and setting the process handle to an invalid handle value, which will cause the call to fail. We check for self-terminate by looking for NtTerminateProcess calls that won't return or whose target PID matches the calling process. Let's see it in action:

The process can run and exit on its own, but trying to kill it in Task Manager fails. And something unexpected happened while I was recording: it looks like MsMpEng.exe attempted to open the process and we blocked it. That's kind of funny since that's Microsoft's malware protection service. We really do have a lot of power to do whatever we want from the hypervisor.

Virtual Machine Introspection is Dead
https://seanlaplante.com/2026/01/20/virtual-machine-introspection-is-dead/
Tue, 20 Jan 2026 03:21:05 +0000

It's very likely not actually dead. I think VMware has some type of commercial VMI for anti-malware. Maybe other big corporations have internal solutions they're not sharing. But in the open source world it's pretty stale. IntroVirt, DRAKVUF, kvm-vmi, HVMI, etc. all exist as a hodgepodge of libraries, kernel patches, and hacks for virtual machine introspection. They are all at varying levels of feature "completeness", with IntroVirt being the most feature-complete user-land library and abstraction layer by far – but only for Windows guests and only on Intel CPUs. The kvm-vmi project has the kvmi sub-project with the most complete KVM kernel patch – supporting Intel, AMD, and ARM – but it's super outdated. Progress to merge it into mainline Linux seems halted, and the most up-to-date version targets the Linux 5.15 kernel.

Each one of the available projects is fairly outdated and/or lacking in one way or another. The core issue of the whole thing is just lack of mainline kernel adoption of VMI into KVM. I don’t know if I’m the right person to start that journey, I don’t know if I have the time….I’m not even sure I care enough. But I have some time today to start kicking some tires and I’m using this blog post as a place to keep notes more than anything else. I may get somewhere with this, I may not, I may abruptly stop and never return, but I started typing these paragraphs when I ran make -j$(nproc) bindeb-pkg and it’s STILL RUNNING! So we’re off to the races! ¯\_(ツ)_/¯ I don’t know what I’m trying to say. Let’s start with what I want:

In a perfect world:

  • VMI functionality would be adopted into the mainline kernel, enabled by some config like KVM_INTROSPECTION
  • VMI would be supported on 64-bit Intel, AMD, and ARM architectures (do we need 32-bit too?)
  • VMI WOULDN'T be some half-completed academic research project – it would actually be a production/commercial-ready thing.
  • A user-land abstraction layer (WITH Python bindings) would exist to make tasks like getting system call traces, setting breakpoints, and reading and writing memory as easy as:
import vmi

def cb(*args):
    print("do syscall stuff idk...but the API should be stupid easy")

with vmi.attach("win10-x64") as iface:
    iface.set_syscall_hook(cb)
    iface.run_forever()  # catch ctrl+c and exit or something...idk

Why though? (my kernel make command is still running by the way)

Am I the only one that thinks VMI is like the coolest thing ever? It enables you to do whatever you want to a guest and there's not much the guest can do about it. You could make an out-of-guest debugger that is impossible to detect. You could strip encryption off practically anything (I'm thinking malware/ransomware). Imagine if we had a robust VMI library and ARM64 support. You could run Android in a VM, install your favorite app, and strip all of its protections and security and figure out how it really works! It's a reverse engineer's dream for anything from hacking to malware analysis with the right tool-set. With GPU pass-through and the right PV drivers you could even write undetectable cheats for games that run outside the guest. The possibilities are endless.

So with all that said, let’s start trying to revive VMI as best we can. I have some experience working on IntroVirt, so I’ll start there. The IntroVirt KVM patch is much smaller than kvmi so it’ll be an easier starting point. It doesn’t fully support AMD and doesn’t support ARM at all, but those are problems for future me (or possibly someone else….possibly YOU).

Start Simple

We're going to start by identifying the latest kernel supported by kvm-introvirt (6.8.0-41) and trying to build that kernel, unmodified, on a fresh install of Ubuntu 24.04.3. But instead of building the Ubuntu kernel, we'll go straight to the kvm source and work from there. The goal (I think) is to migrate away from maintaining a kernel patch for the Ubuntu kernel and instead maintain (and maybe someday get merged in) a patch against the most vanilla Linux kernel we can, and then go from there. If we start from the most vanilla place and distribute whole pre-built kernels and things, we can maybe get more people using it and can more easily port our patch to Ubuntu, Arch Linux, Proxmox, and others, since those are all also based in part on the vanilla Linux kernel. It just feels like the right place to be.

So let’s go. Here’ some stuff I always install:

sudo apt-get install -y make cmake build-essential git vim tmux

And then we need some things to build the kernel:

sudo apt-get install -y bc fakeroot flex bison libelf-dev libssl-dev dwarves debhelper

I like to keep things in a ~/git folder. Let’s put kvm there:

mkdir ~/git
cd ~/git
git clone git://git.kernel.org/pub/scm/virt/kvm/kvm.git
cd ./kvm
# Latest kvm-introvirt patch is for 6.8.0, this was the closest tag
git checkout tags/kvm-6.8-1

Now let's copy our running kernel's config as a starting point and then make sure some settings are on/off (I'm basing some of this on the kvm-vmi setup instructions, with modifications for Ubuntu 24.04 and the fact that I'm not actually building the kvmi kernel).

# Copy our config
cp /boot/config-$(uname -r) .config
# disable kernel modules signature
./scripts/config --disable SYSTEM_TRUSTED_KEYS
./scripts/config --disable SYSTEM_REVOCATION_KEYS
# enable KVM
./scripts/config --module KVM
./scripts/config --module KVM_INTEL
./scripts/config --module KVM_AMD
# Set a version str so we can see it in grub easier as ours
./scripts/config --set-str CONFIG_LOCALVERSION -kvm6.8-1
# Stuff I disabled b/c of warnings like this:
# .config:11148:warning: symbol value 'm' invalid for ANDROID_BINDERFS
./scripts/config --disable ANDROID_BINDERFS
./scripts/config --disable ANDROID_BINDER_IPC
./scripts/config --disable SERIAL_SC16IS7XX_SPI
./scripts/config --disable SERIAL_SC16IS7XX_I2C
./scripts/config --disable HAVE_KVM_IRQ_BYPASS

Now let’s start the build and while it runs you should have time to live out the rest of your life and pass away.

make olddefconfig
make -j$(nproc) bindeb-pkg

Then, if we’re still living, we can install it

sudo dpkg -i ../linux-image-6.7.0-rc7-kvm6.8-1*deb

Let's also update our grub settings so we get a menu at boot, since the kernel we just built won't be the default when we reboot.

# Edit GRUB_TIMEOUT_STYLE to be menu
# Edit GRUB_TIMEOUT to be something like 5 or 10
sudo vim /etc/default/grub
sudo update-grub

Finally we can reboot (and make sure to pick the kernel we just built…make sure it boots and the computer works):

sudo reboot

And we’re back. Let’s confirm our kernel version and see if kvm works:

user@user-XPS-13-9340:~$ uname -r
6.7.0-rc7-kvm6.8-1+
user@user-XPS-13-9340:~$ sudo modprobe kvm-intel
user@user-XPS-13-9340:~$ sudo lsmod | grep kvm
kvm_intel             475136  0
kvm                  1409024  1 kvm_intel
irqbypass              12288  1 kvm

Yes? Seems so. Let’s make a Windows 10 VM (download an ISO from Microsoft).

sudo apt-get install -y virt-manager
sudo systemctl daemon-reload
sudo systemctl start libvirtd
sudo usermod -a -G libvirt $USER
newgrp libvirt
virt-manager

Using the UI, make a VM from the downloaded Windows 10 ISO. We'll just check that it boots at all, and that will be good enough.

Huzzah! Good enough! I’ll go ahead and install Windows 10 just so I have a VM ready. But I think it’s safe to say this kernel works fine (at least for stock KVM). Now let’s see if the kvm-introvirt patch applies cleanly for this kernel version.

RECAP!

What have we done so far? – We built an older Linux kernel and booted into it. (Woooooow!)

What have we done so far for VMI? Nothing.

SecureBoot

All of the above assumes SecureBoot is off, since we didn’t sign the kernel we built. If you want SecureBoot on, let’s see what we can do about that (following this guide).

Change to your home directory and create a file called mokconfig.cnf with the following contents (if you don't want to store these files in the home directory, just make sure $HOME is set and remove the HOME line from the config file below):

# This definition stops the following lines failing if HOME isn't
# defined.
HOME                    = .
RANDFILE                = $ENV::HOME/.rnd 
[ req ]
distinguished_name      = req_distinguished_name
x509_extensions         = v3
string_mask             = utf8only
prompt                  = no

[ req_distinguished_name ]
countryName             = <YOURcountrycode>
stateOrProvinceName     = <YOURstate>
localityName            = <YOURcity>
0.organizationName      = <YOURorganization>
commonName              = Secure Boot Signing Key
emailAddress            = <YOURemail>

[ v3 ]
subjectKeyIdentifier    = hash
authorityKeyIdentifier  = keyid:always,issuer
basicConstraints        = critical,CA:FALSE
extendedKeyUsage        = codeSigning,1.3.6.1.4.1.311.10.3.6
nsComment               = "OpenSSL Generated Certificate"

Replace YOUR* with appropriate values. Then run:

# Make keys
openssl req -config ./mokconfig.cnf \
        -new -x509 -newkey rsa:2048 \
        -nodes -days 36500 -outform DER \
        -keyout "MOK.priv" \
        -out "MOK.der"
# Convert to PEM
openssl x509 -in MOK.der -inform DER -outform PEM -out MOK.pem

At this point, I’m going to reboot, enable secure boot, and boot into the stock Ubuntu kernel (not the one I built). And then I’ll continue.

Import the DER

# Choose any password, it's just to confirm key selection
sudo mokutil --import MOK.der

Restart the system and at the blue screen of the MOKManager, select “Enroll MOK” and then “View key”. Confirm the key, continue, enter the password you just chose, and boot.

Confirm the new key is there:

sudo mokutil --list-enrolled

Sign the kernel, make a copy of initrd for the signed kernel, and update-grub (re-do this step only when you have a new kernel installed to sign).

sudo sbsign --key MOK.priv --cert MOK.pem /boot/vmlinuz-6.7.0-rc7-kvm6.8-1+ --output /boot/vmlinuz-6.7.0-rc7-kvm6.8-1+.signed
sudo cp /boot/initrd.img-6.7.0-rc7-kvm6.8-1+{,.signed}
sudo update-grub

Reboot and at the grub menu, select the signed kernel and it should boot.

Applying a patch

Now that we’ve accomplished basically nothing for VMI, let’s see if we can apply the kvm-introvirt patch to the stock Linux kernel for the same version (I literally cannot imagine a scenario where this doesn’t work).

Start by installing quilt, which we'll need to apply the patch:

sudo apt-get install -y quilt

Now we’ll need to clone kvm-introvirt and change to the folder containing the most up-to-date patch.

cd ~/git
git clone git@github.com:IntroVirt/kvm-introvirt.git
cd ~/git/kvm-introvirt/ubuntu/noble/hwe/6.8.0-41-generic

We are now inside the folder with the most up-to-date patch for Ubuntu 24.04. Do I like how kvm-introvirt got restructured? No I do not. Blame me…it was me. Anyway, we now have to either move the kvm git repo into the folder we're in now, or re-clone all of kvm here and check out the right tag. I tried a symlink and quilt did not apply the patch. So let's move it in:

mv ~/git/kvm ~/git/kvm-introvirt/ubuntu/noble/hwe/6.8.0-41-generic/kernel

Then we can try to apply the patch

quilt push -a

It WORKED! (I hope). It still needs to compile and then run.

So let’s do that. We’ve already done this so I won’t go into verbose detail:

# Remember we moved the kvm repo and named it kernel
# you could put it back now if you want
cd ./kernel
# So we can distinguish our kernel
./scripts/config --set-str CONFIG_LOCALVERSION -introvirt

# Make and install
make olddefconfig
make -j$(nproc) bindeb-pkg
cd ..
sudo dpkg -i linux-image-6.7.0-rc7-introvirt+*deb

# Sign for SecureBoot (if we have that on)
cd ~
sudo sbsign --key MOK.priv --cert MOK.pem /boot/vmlinuz-6.7.0-rc7-introvirt+ --output /boot/vmlinuz-6.7.0-rc7-introvirt+.signed
sudo cp /boot/initrd.img-6.7.0-rc7-introvirt+{,.signed}
sudo update-grub

# Reboot and select the signed IntroVirt kernel at grub
sudo reboot

What a ride. Now we actually have to install IntroVirt, which we can do from source pretty easily. I'll go quick:

# Clone
cd ~/git
git clone git@github.com:IntroVirt/libmspdb.git
git clone git@github.com:IntroVirt/IntroVirt.git

# MS PDB
cd ./libmspdb/build
sudo apt-get install -y cmake libcurl4-openssl-dev libboost-dev git
cmake ..
make -j$(nproc) package
sudo apt install ./*.deb

# IntroVirt
cd ~/git/IntroVirt/build
sudo apt-get install -y python3 python3-jinja2 cmake \
    make build-essential libcurl4-openssl-dev \
    libboost-dev libboost-program-options-dev \
    git clang-format liblog4cxx-dev libboost-stacktrace-dev \
    doxygen liblog4cxx15
cmake ..
make -j$(nproc) package
sudo apt install ./*.deb

Hopefully that all goes well and now we can boot up our VM and test things out.

# Show IntroVirt installed and KVM recognized
sudo ivversion
# Get guest info (OS version etc...)
sudo ivguestinfo -Dwin10
# Systemcall trace
sudo ivsyscallmon -Dwin10

GREAT SUCCESS! And that's it for today. The state of VMI is right where I left it. Hopefully I pick this up again next week.

Hacking My Chevy Volt to Auto-Switch Driving Modes for Efficiency
https://seanlaplante.com/2021/11/13/hacking-my-chevy-volt-to-auto-switch-driving-modes-for-efficiency/
Sat, 13 Nov 2021 23:11:16 +0000

Let's talk about my car. I have a Chevy Volt, which is a Plug-in Hybrid Electric Vehicle (PHEV). On electric alone, it gets about 50 miles of range in ideal conditions, and then it has a gas tank to take you the rest of the way on long trips (about 350 miles from gas). The car also has a handful of modes (as do most cars). I won't bother going into them all here since I just want to focus on two: "normal mode" and "hold mode". In normal mode, the car uses the electric range first and then switches to gas. In hold mode, it switches the car to the gas engine and "holds" the battery where it is. This allows you to save your remaining electric range for later, which is useful on long trips since the electric motor is more efficient below 50 MPH.

So what’s the problem? When I take long trips that exceed the electric range of my car, I want to use the electric range as efficiently as possible. To do that, I try to end the trip with 0 miles of electric range and use the electric engine anytime I’m traveling under 50 MPH and the gas engine for over 50 MPH. This is easy enough to do by hand, but it also seems like an extremely easy thing for the car to do automatically. Chevy could have added another mode called “trip mode” that would do exactly that, but they didn’t, and that’s frustrating. Also, after owning the car for almost 5 years, I find I’m much more forgetful on long trips when it comes to switching engine modes, and I’ll regularly end up on the highway burning through my electric range without noticing.

Designing “Trip Mode”

So, in this blog post, I’m going to be designing and implementing “trip mode” for my Chevy Volt. The final code, setup, and usage instructions will be available in the GitHub repository for this blog post. First, let’s talk about what we need the mode to do at a minimum:

  1. When the car starts, something needs to prompt: “Are you taking a long trip?”.
  2. In trip mode, when traveling under 50 MPH, the car should be in normal mode.
  3. When traveling over 50 MPH, the car should be in hold mode.
  4. There should be a delay or cooldown to prevent mode switching too frequently.
  5. The speed threshold should be configurable.

I have a Raspberry Pi lying around along with a compatible 7-inch touchscreen, so that will work great as an interface to prompt for enabling "trip mode". To talk to and control the car, I will use a Gray Panda from Comma AI. Comma AI doesn't sell the gray panda anymore, but any of the Panda colors will work fine. For this project the $99 white panda would be enough. There are probably other OBD-II devices that could be used (I'm honestly not sure).

Hardware List

Here is the complete hardware list:

  1. Raspberry Pi 4B 4GB or equivalent.
  2. Any Panda should work.
  3. A male-to-male USB type-A cable to connect the Pi to the Panda (just Google and pick one).
  4. A touchscreen for the Raspberry Pi to turn trip mode on and off (I also have the case for the touchscreen).
  5. Some way to mount it somewhere in your car if you want.

I didn’t have a male-to-male USB type-A cable laying around, but a USB type-A to type-C cable will work if you have a male type-C to male type-A adapter like I do:

Makeshift USB type-A male-to-male cable

Once I had the hardware, I connected it up like this:

Back wire connection
Front wire connections
Panda connected to OBD-II

The image on the left shows the back of the Raspberry Pi screen case all loaded up with the Pi and everything. The Panda is just plugged into one of the Raspberry Pi USB ports and the white cable provides power. The white cable can be plugged into either USB port available in the center dash on the Volt. My Raspberry Pi complained about being underpowered sometimes but it didn't seem to affect anything. The image on the right shows the Panda plugged into the OBD-II port to the left of the steering wheel, down towards the floor. My cables were all long enough so I could rest the Pi above the Volt's built-in touch screen. I haven't decided where or how I'm going to mount this thing yet. There is also probably a smaller screen you could get for the Pi, or you could just wire up a single button. Everything I chose was the hardware I had lying around, picked to avoid having to buy anything new.

Interfacing with the Panda from a Raspberry Pi

It’s possible to interface with the Panda via USB from a computer as well, but since the Pi was going to need to talk to it eventually, I just used the Pi as the main interface. The Panda also appears to have Wi-Fi but I didn’t explore that at all. My workflow and coding setup was this:

  1. Pre-configure the Raspberry Pi to connect to my home Wi-Fi.
  2. Get in my car, turn it on, and plug everything in.
  3. SSH to the Pi from my laptop and do all the work from there, transferring files to and from the Pi via SCP as needed.

This is not the best, most comfortable, or most efficient setup, but I was happy with it and it never annoyed me so much that I felt compelled to make it easier.

Preparing the Raspberry Pi

The Panda requires Python 3.8 to build and flash the firmware, which we will need to do later, so let's sort that out now. As of this writing, Python 3.8 is not the default Python on Raspbian. So you can either flash the Pi with Ubuntu (I didn't try this) or you can follow the instructions here for getting Python 3.8 on the Pi. Once done, type python --version at the prompt to ensure the default Python is Python 3.8. Also make sure pip --version says it's using Python 3.8.

Reading data from the CAN bus

We’ll start by writing a simple Python script that will read data from the CAN bus and print it out as a messy jumble of unreadable nonsense (exactly like the example provided in the README for the Panda on GitHub).

First, we have to set up some udev rules so that, when the Panda is detected, the device node is given permissions 0666 instead of the default 0660 (i.e. allow everyone read/write, not just the owner and group).

sudo tee /etc/udev/rules.d/11-panda.rules <<EOF
SUBSYSTEM=="usb", ATTRS{idVendor}=="bbaa", ATTRS{idProduct}=="ddcc", MODE="0666"
SUBSYSTEM=="usb", ATTRS{idVendor}=="bbaa", ATTRS{idProduct}=="ddee", MODE="0666"
EOF
sudo udevadm control --reload-rules && sudo udevadm trigger

It's fine if the Panda was already connected; that's what the udevadm trigger is for. If you're curious, you can check dmesg when you plug in the Panda and you should see something like this:

[ 1645.769508] usb 1-1.3: new full-speed USB device number 5 using dwc_otg
[ 1645.906006] usb 1-1.3: not running at top speed; connect to a high speed hub
[ 1645.916768] usb 1-1.3: New USB device found, idVendor=bbaa, idProduct=ddcc, bcdDevice= 2.00
[ 1645.916796] usb 1-1.3: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 1645.916812] usb 1-1.3: Product: panda
[ 1645.916826] usb 1-1.3: Manufacturer: comma.ai                                        
[ 1645.916841] usb 1-1.3: SerialNumber: XXXXXXXXXXXXXXXXXXXXXXXX

You can also run lsusb to see the bus and device number given to the Panda (it will be the one with ID bbaa:ddcc or bbaa:ddee).

Bus 001 Device 005: ID bbaa:ddcc

This tells us the device path for the Panda is /dev/bus/usb/001/005, and sure enough if I do an ls -l on /dev/bus/usb/001 I see device 005 has the rw permission for owner, group, and other, which is exactly what the udev rule above was supposed to do:

crw-rw-rw- 1 root root 189, 4 Nov  8 17:00 005

With all of that unnecessary verification out of the way, we should now be able to interface with the Panda. So let’s first install the required Python dependency.

pip install pandacan

Then, from within a Python shell, you should be able to run the following commands and get a wall of nonsense printed back after the can_recv step:

>>> from panda import Panda
>>> panda = Panda()
>>> panda.can_recv()

Reading The Car’s Current Speed

In order to switch modes based on the car’s current speed, we will need to monitor how fast the car is going. To figure out which messages on the CAN bus contain the car’s speed and how to parse it out would require a lot of trial and error and reverse engineering. Thankfully, much of that has already been done. Comma AI’s Open Pilot is already a fairly mature system that can add auto-drive features to a surprising number of cars, so they’ve already figured a lot of this stuff out. There’s also plenty of resources from the car hacking community that break down several different message types for different cars. A lot of good reversing info on the Volt (and other cars) can be found here and here. However, the best resource I found, once I learned how to decipher it, was the opendbc repository from Comma AI.

Let’s see if we can find any information about how to parse the vehicle speed from the GM DBC files in opendbc. Looks like the file we want is gm_global_a_powertrain.dbc since it contains an ECMVehicleSpeed message definition with a VehicleSpeed signal definition in miles per hour. Sounds like exactly what we want:

BO_ 1001 ECMVehicleSpeed: 8 K20_ECM
 SG_ VehicleSpeed : 7|16@0+ (0.01,0) [0|0] "mph"  NEO

Now, these definitions are supposedly “human readable”, but that’s only true if you have a translator. This DBC_Specification.md on GitHub was extremely helpful with that. The first line that begins with BO_ is the message definition. The CAN ID is 1001 (0x3e9 hex), the total message length is 8 bytes and the sender is K20_ECM (I don’t know what that is besides an engine control module).

The line that begins with SG_ is the signal definition. This message has one signal, called VehicleSpeed, which starts at bit 7, is 16 bits long (2 bytes), big endian, unsigned, has a factor of 0.01 (meaning we'll need to divide by 100), and is in units of miles per hour.
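
To make that concrete, here's the decode done by hand in a Python shell. The payload bytes below are made up purely for illustration; a raw value of 0x1388 (5000) works out to 50 MPH after the 0.01 factor:

>>> import struct
>>> dat = b'\x13\x88\x00\x00\x00\x00\x00\x00'  # pretend 8-byte CAN payload
>>> struct.unpack("!H", dat[:2])[0]            # big-endian unsigned 16-bit value
5000
>>> struct.unpack("!H", dat[:2])[0] / 100.0    # apply the 0.01 factor
50.0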

Let’s write a Python script to monitor the vehicle’s speed, print it to the terminal, and then go for a drive to test it out.

import sys
import struct

from panda import Panda

CURRENT_SPEED = 0.0


def read_vehicle_speed(p: Panda) -> None:
    """Read the CAN messages and parse out the vehicle speed."""
    global CURRENT_SPEED

    for addr, _, dat, _src in p.can_recv():
        if addr == 0x3e9:  # Speed is 1001 (0x3e9 hex)
            # '!' for big-endian. H for unsigned short (since it's 16 bits or 2 bytes)
            # divide by 100.0 b/c factor is 0.01
            CURRENT_SPEED = struct.unpack("!H", dat[:2])[0] / 100.0

        # Just keep updating the same line
        sys.stdout.write(f"\rSpeed: {int(CURRENT_SPEED):03d}")
        sys.stdout.flush()


def main() -> None:
    """Entry Point. Monitor Chevy volt speed."""
    try:
        p = Panda()
    except Exception as exc:
        print(f"Failed to connect to Panda! {exc}")
        return

    try:
        while True:
            read_vehicle_speed(p)
    except KeyboardInterrupt:
        print("Exiting...")
    finally:
        p.close()


if __name__ == "__main__":
    main()

That worked great! The vehicle’s speed is accurate to within 1 MPH of what’s displaying on the dashboard (which makes sense since the Pi is slow and the dashboard is also probably intentionally slow at updating).

Close enough!

Detecting Driving Mode Button Presses

Now that we’re monitoring the car’s speed, we can look into the mode button. To identify an unknown button using the Panda, there are some helpful scripts you can use: can_logger.py and can_unique.py. Instructions for how to use them to identify unknown button presses and other behavior can be found in the can_unique.md file. The basic procedure is this:

  1. Capture CAN bus traffic while not pressing the button and save that to a file
  2. Repeat step one but press the button a couple times
  3. Use can_unique.py to compare the files.

It’s important to log for long enough that you get rid of the bulk of the background noise. Thankfully though, looking at the DBC file from before, I found DriveModeButton which looks like what we need. The full message definition is below:

BO_ 481 ASCMSteeringButton: 7 K124_ASCM
 SG_ DistanceButton : 22|1@0+ (1,0) [0|0] ""  NEO
 SG_ LKAButton : 23|1@0+ (1,0) [0|0] ""  NEO
 SG_ ACCButtons : 46|3@0+ (1,0) [0|0] ""  NEO
 SG_ DriveModeButton : 39|1@0+ (1,0) [0|1] "" XXX

That's a little confusing since the button is definitely not on the steering wheel, but it's the right one. So, parsing that out, the only signal we care about is DriveModeButton, which looks to be at bit offset 39, is one bit long, and is either a 1 or a 0. In Python we get the message data as a byte array, so bit 39 will be the 7th bit in the 4th byte of the message (I'm a programmer, so all my numbers are indexed at 0). We can make a quick modification to monitor.py above and we'll have it monitoring for engine mode button presses too:

CURRENT_SPEED = 0.0
BUTTON = 0
PRESS_COUNT = 0


def read_vehicle_speed(p: Panda) -> None:
    """Read the CAN messages and parse out the vehicle speed."""
    global CURRENT_SPEED
    global BUTTON
    global PRESS_COUNT

    for addr, _, dat, _src in p.can_recv():
        if addr == 0x3e9:  # Speed is 1001 (0x3e9 hex)
            # '!' for big-endian. H for unsigned short (since it's 16 bits or 2 bytes)
            # divide by 100.0 b/c factor is 0.01
            CURRENT_SPEED = struct.unpack("!H", dat[:2])[0] / 100.0
        elif addr == 0x1e1:  # ASCMSteeringButton
            # Check if the 7th bit of byte 4 is a 1
            if int(dat[4]) >> 7 & 1:
                BUTTON = 1
            elif BUTTON == 1:
                # Increment the press count on button release
                PRESS_COUNT += 1
                BUTTON = 0
            else:
                BUTTON = 0

        # Just keep updating the same line
        sys.stdout.write(f"\rSpeed: {int(CURRENT_SPEED):03d} Button: {BUTTON}")
        sys.stdout.flush()

Sending CAN Messages

Now that we can see when the mode button is pressed, we need to figure out how to actually send the button press ourselves. The first thing we need to do before we can get started is flash the Panda with a debug build of its firmware. In a release build, the Panda won't send anything, even if you tell it to, unless you enable one of the supported safety models. They have a safety model for each supported car, but those safety models only allow sending the messages they require for auto-driving, and the engine mode button is not one of them. Once we flash it with a debug build, we'll be able to use the SAFETY_ALLOUTPUT safety model (more like a danger model am I right?).

Flashing the Panda from a Raspberry Pi

To flash the Panda from my Raspberry Pi, the only thing I got hung up on was the Python 3.8 issue (noted above). Once you have Python 3.8, flashing the panda should be pretty easy:

  1. Clone the Panda GitHub repository
  2. Change into the board folder and run get_sdk.sh
  3. Once complete, run scons -u to compile
  4. Unplug the Panda from the OBD-II port on the car
  5. Power-cycle it by unplugging it from the Pi and plugging it back in (probably not required but for good measure).
  6. Once you see a slowly flashing blue light you can proceed with the final step
  7. Finally, with the Panda connected to the Pi but NOT connected to the OBD-II port, run flash.sh

Once flashed, verify the version by running the following in a Python shell:

>>> from panda import Panda
>>> p = Panda()
opening device XXXXXXXXXXXXXXXXXXXXXXXX 0xddcc
connected
>>> p.get_version()
'DEV-4d57e48f-DEBUG'
>>> p.close()

The hex digits between DEV and DEBUG should match the beginning of the hash for the git commit you have checked out. We're in DEBUG mode now, training wheels are off. Be careful and don't drive yourself off a cliff. Once we sort out what we actually need to write in order to change the driving modes, we can modify the Panda firmware to add a new safety model that only allows sending that driving mode button press.

Sending Driving Mode Button Presses

We have the DBC message definition for the drive mode button and we can see when it’s pressed. All we should have to do is enable the bus, set the safety model to allow everything, and blast out a message with a 1 in the correct place. According to the definition of ASCMSteeringButton above, the full message is 7 bytes. So we can start with 7 bytes of 0x00:

message = bytearray(b'\x00\x00\x00\x00\x00\x00\x00')

Now we need a 1 in the 7th bit of the 4th byte. You can figure that out however you want. One easy way is to just type it into a programming calculator.
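
A quick check in a Python shell gets the same answer (bit 7 is the most-significant bit of a byte):

>>> 1 << 7
128
>>> hex(1 << 7)
'0x80'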

That makes 128 in decimal, or 0x80 in hex. So we can just put 0x80 in the 4th byte (indexed at 0) and that should do it:

message = bytearray(b'\x00\x00\x00\x00\x80\x00\x00')

That’s our “drive mode button press” message. Now we have to:

  1. Set the safety mode: p.set_safety_mode(Panda.SAFETY_ALLOUTPUT)
  2. Enable output on CAN bus 0 which is the powertrain (ref): p.set_can_enable(0, True)
  3. Flush the Panda’s buffers for good measure: p.can_clear(0xFFFF)
  4. Call p.can_send with the message ID (0x1e1), message, and a bus ID of 0 (for powertrain)

Putting it all together, we have a prototype send.py. We'll use the press_count later to get us to the mode we want; for now, because of the while True, we'll just end up sending a button press every second.

import time

from panda import Panda


def send_button_press(p: Panda, press_count: int = 2) -> None:
    """Send the ASCMSteering DriveModeButton signal."""
    msg_id = 0x1e1  # 481 decimal
    bus_id = 0
    message = bytearray(b'\x00\x00\x00\x00\x80\x00\x00')  # 0x80 is a 1 in the 7th bit

    for press in range(press_count):
        p.can_send(msg_id, message, bus_id)
        print(f"Sent press {press + 1}")
        time.sleep(1)


def main() -> None:
    """Entry Point. Send drive mode button presses."""
    try:
        p = Panda()
    except Exception as exc:
        print(f"Failed to connect to Panda! {exc}")
        return

    try:
        p.set_safety_mode(Panda.SAFETY_ALLOUTPUT)  # Turn off all safety preventing sends
        p.set_can_enable(0, True)  # Enable bus 0 for output
        p.can_clear(0xFFFF)  # Flush the panda CAN buffers
        while True:
            send_button_press(p)
    except KeyboardInterrupt:
        print("Exiting...")
    finally:
        p.close()


if __name__ == "__main__":
    main()

SUCCESS!!!

That’s not me!

Implementing Trip Mode

All the hard parts are done, now we just have to create a script that will change the mode based on a set of rules. First it’s important to talk about the behavior of the mode button to make sense of why I wrote the code the way I did.

  1. The mode button press is not registered until it is released. You can hold it down as long as you want and nothing happens in the car until the button is no longer being pressed.
  2. When pressed, the mode selection screen comes up and you have about 3 seconds where repeated button presses change the selected mode. Wait longer than 3 seconds and you need 1 button press just to re-activate the mode selection process.
  3. When activated, the highlighted mode is always “normal” regardless of which mode the car is currently in (this is a good thing for us).

Since the highlighted mode is always “normal” after the initial button press, we don’t have to monitor for button presses by the user. If the user changes the mode on us, our code will still work, since the highlighted mode always starts with “normal”. This also means we only need 1 button press to switch back to normal mode at any time. In fact, if we store our modes in a list, the number of button presses needed to get to any mode will be its index plus 1.
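
To make that concrete, here's a tiny sketch of the press-count math. The mode names and their order below are my assumption about how the Volt's selection screen lays them out, so treat the list itself as illustrative:

# Assumed order of the modes on the selection screen (highlight always starts on NORMAL)
DRIVE_MODES = ["NORMAL", "SPORT", "MOUNTAIN", "HOLD"]


def presses_needed(target_mode: str) -> int:
    """One press to bring up the selection screen, plus the index to step to the target."""
    return 1 + DRIVE_MODES.index(target_mode)


print(presses_needed("NORMAL"))  # 1
print(presses_needed("HOLD"))    # 4 (with the order assumed above)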

Now, since button presses aren’t registered until release, it may seem like we need to follow each “press” signal with a “release” signal, but we actually don’t need to. Some other module in the car (not us) is still sending the ASCMSteeringButton message with the DriveModeButton signal set to 0 (when not pressed). When we blast out our message with DriveModeButton set to 1, we’re drowning out those 0’s in a way (I’m not an expert on CAN bus stuff). Once we stop sending that “1”, the module that’s supposed to send the message will continue sending “0”, since the button isn’t being pressed, and that will work as our button release.

Alright now we can dive into the code. First we have to create our Panda object and disable the safety and all that:

try:
    p = Panda()
    p.set_safety_mode(Panda.SAFETY_ALLOUTPUT)  # Turn off all safety preventing sends
    p.set_can_enable(0, True)  # Enable bus 0 for output
    p.can_clear(0xFFFF)  # Flush the panda CAN buffers
except Exception as exc:
    logging.error(f"Failed to connect to Panda! {exc}")
    return

Next, I created a CarState (not the best name) class to handle the rest. This class will take the Panda object and it will have an update method that we have to call in a forever loop. The update method will read CAN messages, set the speed, check if we need to change modes, and send button presses to switch modes if needed.

def update(self):
    """Update the state from CAN messages."""
    for addr, _, dat, _src in self.panda.can_recv():
        if addr == 0x3e9:  # Speed is 1001 (0x3e9 hex)
            # '!' for big-endian. H for unsigned short (since it's 16 bits or 2 bytes)
            # divide by 100.0 b/c factor is 0.01
            self._set_speed(struct.unpack("!H", dat[:2])[0] / 100.0)

        now = time.perf_counter()
        if self.pending_sends and now > self.allow_button_press_after:
            self.allow_button_press_after = now + self.BUTTON_PRESS_COOLDOWN
            send = self.pending_sends.pop(0)
            self.panda.can_send_many(send)

Similar to our monitor.py script, we loop over the messages and update the speed for the class. Then we check if there’s anything to send and if we’ve waited long enough before sending the next button press. If we send them too quickly they will be registered as if the button is being held down. You may also notice that we’re using can_send_many instead of can_send now. This is because, while driving, there’s a bunch more happening on the CAN bus than when sitting in my driveway. I noticed in testing that sometimes button presses were being missed when I was just sending a single message. So now I group 50 “press” messages and call can_send_many which blasts them out. This has the effect of making it look like the button is being held down for a moment before release (like a real human).

When the speed is set, we check if we’ve crossed the threshold and need to switch modes.

def _set_speed(self, speed):
    """Set the current speed and trigger mode changes."""
    speed = int(speed)
    if self.speed > self.speed_threshold and speed < 1:
        # HACK: Speed jumps to 0 b/w valid values.
        # This hack should handle it.
        return

    self.speed = speed
    if self.speed > self.speed_threshold and self.mode == "NORMAL":
        logging.debug(f"Speed trigger (attempt HOLD): {self.speed}")
        self._switch_modes("HOLD")
    elif self.speed <= self.speed_threshold and self.mode == "HOLD":
        logging.debug(f"Speed trigger (attempt NORMAL): {self.speed}")
        self._switch_modes("NORMAL")

There is a HACK in there that I don’t love but it works. If the previous speed is over the threshold (which will always be some high speed like 50 MPH or more), we check if the speed being set is less than 1 (and ignore if it is). This happens pretty constantly. It must be that the current speed from the ECMVehicleSpeed message isn’t always valid. I’m not sure how to tell when it’s valid or not, but the only way this hack will cause an issue is if you go from 50 MPH to 0 MPH in less than a fraction of a second. If that happens I don’t think we need to worry about switching the engine mode.

Finally, the _switch_modes method takes the requested mode, checks if we’re past the cooldown, and then updates our list of pending_sends with the button presses that need to be sent by the update method.

def _switch_modes(self, new_mode: str) -> None:
    """Send the messages needed to switch modes if past our cooldown."""
    now = time.perf_counter()
    if now <= self.allow_mode_switch_after:
        return

    logging.info(f"Switch to {new_mode} mode. Speed: {self.speed}")

    # Update our cooldown and mode
    self.allow_mode_switch_after = now + self.MODE_SWITCH_COOLDOWN
    self.mode = new_mode

    # Required presses starts at 1 (to activate the screen) and
    # mode selection always starts on NORMAL.
    required_presses = 1 + self.DRIVE_MODES.index(new_mode)
    logging.debug(f"Needs {required_presses} presses")
    for _ in range(required_presses):
        cluster = []
        for _inner in range(self.SEND_CLUSTER_SIZE):
            cluster.append([self.MSG_ID, None, self.PRESS_MSG, 0])
        self.pending_sends.append(cluster)

If we already switched modes recently, this method just returns. Since it doesn't update the current mode, the _set_speed method will trigger it over and over until the mode switches or the car begins traveling at a valid speed for the mode it's already in. When the mode does switch, we compute the number of presses as the list index of the mode plus 1 (as discussed earlier) and then we update pending_sends. The clusters are groups of 50 "press" messages that are sent all at once with can_send_many so that we make sure our button press is long enough to be registered by the car.

That covers all the logic of the CarState class; the rest is just initialization. The full tripmode.py file can be found here: https://github.com/vix597/chevy-volt-trip-mode

Wrap it Up!

This blog post is probably too long but we’re almost done. What’s left?

  1. Create a GUI with an “on” and “off” button
  2. Install it on the pi and have it start automatically at boot

For the GUI I’m going to try PySimpleGUI which seems to be exactly what I need. The “simple” in PySimpleGUI is no joke either, I’ve barely added any code and I have a working GUI.

def main() -> None:
    """Entry Point. Run the trip mode GUI."""
    trip_mode_enabled = False
    car_state = None

    # Theme and layout for the window
    sg.theme('DarkAmber')
    layout = [
        [sg.Text('TRIP MODE')],
        [sg.Button('ON'), sg.Button('OFF')]
    ]

    # Create the Window (800x480 is Raspberry Pi touchscreen resolution)
    window = sg.Window(
        'Trip Mode', layout, finalize=True,
        keep_on_top=True, no_titlebar=True,
        location=(0, 0), size=(800, 480))
    window.maximize()  # Make it fullscreen

    # Event Loop to process "events" and get the "values" of the inputs
    while True:
        event, _values = window.read(timeout=0)  # Return immediately
        if event == sg.WIN_CLOSED:
            if car_state:
                car_state.close()
            break

        if event == "ON" and not trip_mode_enabled:
            trip_mode_enabled = True
            car_state = enable()
        elif event == "OFF" and trip_mode_enabled:
            trip_mode_enabled = False
            car_state.close()
            car_state = None

        if car_state:
            car_state.update()

    window.close()

I broke out the code for creating the Panda connection and CarState object into an enable method which returns the CarState. Then I added a method to CarState to close the Panda connection and that was it. The PySimpleGUI code is copy-pasted from the main example on their home page with the text input box removed and the title and button text changed.
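
For reference, here's a minimal sketch of what that enable() helper might look like, assuming it lives alongside the CarState class and just wraps the Panda setup code from earlier (the real version in the repo may differ):

import logging

from panda import Panda


def enable():
    """Connect to the Panda, disable the safety model, and return a CarState (or None on failure)."""
    try:
        p = Panda()
        p.set_safety_mode(Panda.SAFETY_ALLOUTPUT)  # Turn off all safety preventing sends
        p.set_can_enable(0, True)  # Enable bus 0 for output
        p.can_clear(0xFFFF)  # Flush the panda CAN buffers
    except Exception as exc:
        logging.error(f"Failed to connect to Panda! {exc}")
        return None
    return CarState(p)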

I’m going to save “start at boot” along with other improvements for later. I think this blog post is plenty long and covers everything I wanted. Check out the finished project over on GitHub and try it out for yourself if you also have a Chevy Volt. Or don’t. #GreatJob

References

  1. https://vehicle-reverse-engineering.fandom.com/wiki/GM_Volt
  2. https://github.com/openvehicles/Open-Vehicle-Monitoring-System/blob/master/vehicle/Car%20Module/VoltAmpera/voltampera_canbusnotes.txt
  3. https://itheo.tech/install-python-3-8-on-a-raspberry-pi/
  4. https://github.com/commaai/panda
  5. https://github.com/commaai/openpilot
  6. https://pysimplegui.readthedocs.io/en/latest/

Creating an Icon From a Song
https://seanlaplante.com/2021/10/27/creating-an-icon-from-a-song/
Wed, 27 Oct 2021 04:53:35 +0000

My adventure into the world of blogging has been going for a week and a half now. I have a half-written how-to walking through how I set up this blog, and now I'm working on this post (which will come out first). It will not be long before I'm a blogging expert! The one roadblock remaining is that this site has no favicon (at least prior to publishing this). This post aims to solve that problem. Once I have a favicon, I'll really be a force in the blogging community. Unfortunately for me, however, I'm not an artist and my wife is busy making a blanket. I'm okay at Python, so I'll generate a favicon using that.

My idea is to generate my site's favicon programmatically using a song as input. I will need to be able to read and parse an MP3 file and write an image one pixel at a time. I can use Pillow to generate the image, but I'll have to search around for something to parse an MP3 file in Python. It would be pretty easy to just open and read the song file's bytes and generate an image with some logic from that, but I'd like to actually parse the song so that I can generate something from the music. Depending on what library I find, maybe I'll do something with beat detection. When this is all said and done, you'll be able to see the finished product on GitHub. First, a few questions:

How big is the icon supposed to be?

Looks like when I try to add one to the site, WordPress tells me it should be at least 512×512 pixels.

Can I use Pillow to make an .ico file?

Yes, but that doesn’t matter because it looks like WordPress doesn’t use .ico files “for security reasons” that I didn’t bother looking into. I’ll be generating a .png instead.
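
For the curious, Pillow can write an .ico directly; its ICO writer takes a sizes list of resolutions to embed. Something like this would do it (file names here are just examples):

from PIL import Image

# Convert an existing PNG into a multi-resolution .ico file.
img = Image.open("favicon.png")
img.save("favicon.ico", sizes=[(16, 16), (32, 32), (48, 48), (64, 64)])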

Can I read/process .mp3 files in Python?

Of course! With librosa it seems.

Generating a .png file in Python

With all my questions from above answered, I can get right into the code. Let’s start with something simple: generating a red square one pixel at a time. We will need this logic because when we generate an image from a song, we’re going to want to make per-pixel decisions based on the song.

import numpy as np
from PIL import Image
from typing import Tuple

SQUARE_COLOR = (255, 0, 0, 255)  # Let's make a red square
ICON_SIZE = (512, 512)  # The recommended minimum size from WordPress


def generate_pixels(resolution: Tuple[int, int]) -> np.ndarray:
    """Generate pixels of an image with the provided resolution."""
    pixels = []

    # Eventually I'll extend this to generate an image one pixel at a time
    # based on an input song.
    for _row in range(resolution[1]):
        cur_row = []
        for _col in range(resolution[0]):
            cur_row.append(SQUARE_COLOR)
        pixels.append(cur_row)

    return np.array(pixels, dtype=np.uint8)


def main():
    """Entry point."""

    # For now, just make a solid color square, one pixel at a time,
    # for each resolution of our image.
    img_pixels = generate_pixels(ICON_SIZE)

    # Create the image from our multi-dimensional array of pixels
    img = Image.fromarray(img_pixels)
    img.save('favicon.png')  # PNG output; the ICO-style sizes argument isn't needed here


if __name__ == "__main__":
    main()

It worked! We have a red square!

A red square

Analyzing an MP3 file in Python

Since we’ll be generating the image one pixel at a time, we need to process the audio file and then be able to check some values in the song for each pixel. In other words, each pixel in the generated image will represent some small time slice of the song. To determine what the color and transparency should be for each pixel, we’ll need to decide what features of the song we want to use. For now, let’s use beats and amplitude. For that, we’ll need to write a Python script that:

  1. Processes an MP3 file from a user-provided path.
  2. Estimates the tempo.
  3. Determines, for each pixel, whether it falls on a beat in the song.
  4. Determines, for each pixel, the average amplitude of the waveform at that pixel’s “time slice”.

Sounds like a lot, but librosa is going to do all the heavy lifting. First I’ll explain the different parts of the script, then I’ll include the whole file.

librosa makes it really easy to read and parse an MP3. The following will read in and parse an MP3 into the time series data and sample rate.

>>> import librosa
>>> time_series, sample_rate = librosa.load("brass_monkey.mp3") 
>>> time_series
array([ 5.8377805e-07, -8.7419551e-07,  1.3259771e-06, ...,
       -2.1545576e-01, -2.3902495e-01, -2.3631646e-01], dtype=float32)
>>> sample_rate
22050

I chose Brass Monkey by the Beastie Boys because I like it and it’s easy to look up the BPM online for a well-known song. According to the internet, the song is 116 BPM. Let’s see what librosa says in our next code block, where I show how to get the tempo of a song.

>>> onset_env = librosa.onset.onset_strength(time_series, sr=sample_rate)
>>> tempo = librosa.beat.tempo(onset_envelope=onset_env, sr=sample_rate)
>>> tempo[0]
117.45383523

Pretty spot-on! No need to test any other songs, librosa is going to work perfectly for what I need.

I know the size of the image is 512×512 which is 262,144 pixels in total, so we just need the song’s duration and then it’s simple division to get the amount of time each pixel will represent.

>>> duration = librosa.get_duration(filename="brass_monkey.mp3") 
>>> duration
158.6
>>> pixel_time = duration / (512 * 512)
>>> pixel_time
0.000605010986328125

So, the song is 158.6 seconds long and each pixel in a 512×512 image will account for about 0.0006 seconds of song. Note: It would have also been possible to get the song duration by dividing the length of the time series data by the sample rate:

>>> len(time_series) / sample_rate
158.58666666666667

Either is fine. The division above is more efficient since the song file doesn’t need to be opened a second time. I chose to go with the helper function for readability.

Now, for each pixel we need to:

  1. Determine if that pixel is on a beat or not
  2. Get the average amplitude for all samples that happen within that pixel’s time slice

We’re only missing 2 variables to achieve those goals: beats per second and samples per pixel. To get the beats per second we just divide the tempo by 60. To get the whole samples per pixel we round down the result of the number of samples divided by the number of pixels.

>>> import math 
>>> bps = tempo[0] / 60.0 
>>> bps
1.9575639204545456
>>> samples_per_pixel = math.floor(len(time_series) / (512 * 512))
>>> samples_per_pixel
13

So we have about 1.9 beats per second, and each pixel in the image represents 13 samples. That means we’ll be taking the average of 13 samples for each pixel to get an amplitude at that pixel. I could have also chosen the max, min, median, or really anything; the average is just what I decided to use.

I’m relying on the song being long enough that samples_per_pixel comes out to at least 1. If it’s 0 we’ll need to print an error and quit, since the song wouldn’t have enough data to make the image (a quick guard for that case is sketched below). Now we have everything we need to loop over each pixel, check whether it’s a “beat pixel”, and get the average amplitude of the waveform for the pixel’s time slice.
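
The guard might look something like this (the error message and exit code are my own wording):

import sys

# samples_per_pixel comes from the snippet above; bail out early if the
# song can't put at least one sample behind every pixel.
if samples_per_pixel == 0:
    print("Not enough song data to fill a 512x512 image.")
    sys.exit(1)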

>>> import numpy as np
>>> beats = 0
>>> num_pixels = 512 * 512
>>> avg_amps = []
>>> for pixel_idx in range(num_pixels):
...     song_time = pixel_idx * pixel_time
...     song_time = math.floor(song_time)
...     if song_time and math.ceil(bps) % song_time == 0:
...         beats += 1
...     sample_idx = pixel_idx * samples_per_pixel
...     samps = time_series[sample_idx:sample_idx + samples_per_pixel]
...     avg_amplitude = np.array(samps).mean()
...     avg_amps.append(avg_amplitude)
...
>>> print(f"Found {beats} pixels that land on a beat")
Found 3306 pixels that land on a beat
>>>

The full script with comments, error handling, a command line argument to specify the song file, and a plot to make sure we did the average amplitude correctly is below:

import os
import sys
import math
import argparse
import librosa
import numpy as np
import matplotlib.pyplot as plt


def main():
    """Entry Point."""
    parser = argparse.ArgumentParser("Analyze an MP3")
    parser.add_argument(
        "-f", "--filename", action="store",
        help="Path to an .mp3 file", required=True)
    args = parser.parse_args()

    # Input validation
    if not os.path.exists(args.filename) or \
       not os.path.isfile(args.filename) or \
       not args.filename.endswith(".mp3"):
        print("An .mp3 file is required.")
        sys.exit(1)

    # Get the song duration
    duration = librosa.get_duration(filename=args.filename)

    # Get the estimated tempo of the song
    time_series, sample_rate = librosa.load(args.filename)
    onset_env = librosa.onset.onset_strength(time_series, sr=sample_rate)
    tempo = librosa.beat.tempo(onset_envelope=onset_env, sr=sample_rate)
    bps = tempo / 60.0  # beats per second

    # The image I'm generating is going to be 512x512 (or 262,144) pixels.
    # So let's break the duration down so that each pixel represents some
    # amount of song time.
    num_pixels = 512 * 512
    pixel_time = duration / num_pixels
    samples_per_pixel = math.floor(len(time_series) / num_pixels)
    print(f"Each pixel represents {pixel_time} seconds of song")
    print(f"Each pixel represents {samples_per_pixel} samples of song")

    # Now I just need 2 more things
    # 1. a way to get "beat" or "no beat" for a given pixel
    # 2. a way to get the amplitude of the waveform for a given pixel
    beats = 0
    avg_amps = []
    for pixel_idx in range(num_pixels):
        song_time = pixel_idx * pixel_time

        # To figure out if it's a beat, let's just round and
        # see if it's evenly divisible
        song_time = math.floor(song_time)
        if song_time and math.ceil(bps) % song_time == 0:
            beats += 1

        # Now let's figure out the average amplitude of the
        # waveform for this pixel's time
        sample_idx = pixel_idx * samples_per_pixel
        samps = time_series[sample_idx:sample_idx + samples_per_pixel]
        avg_amplitude = np.array(samps).mean()
        avg_amps.append(avg_amplitude)

    print(f"Found {beats} pixels that land on a beat")

    # Plot the average amplitudes and make sure it still looks
    # somewhat song-like
    xaxis = np.arange(0, num_pixels, 1)
    plt.plot(xaxis, np.array(avg_amps))
    plt.xlabel("Pixel index")
    plt.ylabel("Average Pixel Amplitude")
    plt.title(args.filename)
    plt.show()


if __name__ == "__main__":
    main()

First Attempt

Right now we have two prototype Python files complete. They can be found in the prototypes folder of the GitHub repository for this project (or above). Now we have to merge those two files together and write some logic for deciding what color a pixel should be, based on the song position for that pixel. We can basically throw away the prints and graph plot from mp3_analyzer.py and just keep the math and pixel loop, which we’ll modify into a helper method and jam into our red_square.py script. We will also want to add some more error handling and command line options.

We’ll start with the pixel loop from mp3_analyzer.py. Let’s convert it into a SongImage class that takes a song path and image resolution, and does all the math to store the constants we need (beats per second, samples per pixel, etc.). The SongImage class will have a helper function that takes a pixel index as input and returns a tuple with 3 items. The first item in the returned tuple will be a boolean for whether or not the provided index falls on a beat. The second item will be the average amplitude of the song for that index. Finally, the third item will be the timestamp for that pixel.

import math
from typing import Tuple

import librosa
import numpy as np

# NotEnoughSong is a small custom exception defined elsewhere in the project.


class SongImage:
    """An object to hold all the song info."""

    def __init__(self, filename: str, resolution: Tuple[int, int]):
        self.filename = filename
        self.resolution = resolution
        #: Total song length in seconds
        self.duration = librosa.get_duration(filename=self.filename)
        #: The time series data (amplitudes of the waveform) and the sample rate
        self.time_series, self.sample_rate = librosa.load(self.filename)
        #: An onset envelope is used to measure BPM
        onset_env = librosa.onset.onset_strength(self.time_series, sr=self.sample_rate)
        #: Measure the tempo (BPM)
        self.tempo = librosa.beat.tempo(onset_envelope=onset_env, sr=self.sample_rate)
        #: Convert to beats per second
        self.bps = self.tempo / 60.0
        #: Get the total number of pixels for the image
        self.num_pixels = self.resolution[0] * self.resolution[1]
        #: Get the amount of time each pixel will represent in seconds
        self.pixel_time = self.duration / self.num_pixels
        #: Get the number of whole samples each pixel represents
        self.samples_per_pixel = math.floor(len(self.time_series) / self.num_pixels)

        if not self.samples_per_pixel:
            raise NotEnoughSong(
                "Not enough song data to make an image "
                f"with resolution {self.resolution[0]}x{self.resolution[1]}")

    def get_info_at_pixel(self, pixel_idx: int) -> Tuple[bool, float, float]:
        """Get song info for the pixel at the provided pixel index."""
        beat = False
        song_time = pixel_idx * self.pixel_time

        # To figure out if it's a beat, let's just round and
        # see if it's evenly divisible
        song_time = math.floor(song_time)
        if song_time and math.ceil(self.bps) % song_time == 0:
            beat = True

        # Now let's figure out the average amplitude of the
        # waveform for this pixel's time
        sample_idx = pixel_idx * self.samples_per_pixel
        samps = self.time_series[sample_idx:sample_idx + self.samples_per_pixel]
        avg_amplitude = np.array(samps).mean()
        return (beat, avg_amplitude, song_time)
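
With the class in place, using it only takes a couple of lines (the file name here is just an example):

# Quick usage sketch of the SongImage class above.
song = SongImage("brass_monkey.mp3", (512, 512))
beat, avg_amp, timestamp = song.get_info_at_pixel(1000)
print(beat, avg_amp, timestamp)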

Now we have a useful object. We can give it a song file and image resolution and then we can ask it (for each pixel) if that pixel is on a beat and what the average amplitude of the waveform is for that pixel. Now we have to apply that information to some algorithm that will generate an image. Spoiler alert, my first attempt didn’t go well. I will leave it here as a lesson in what not to do.

def generate_pixels(resolution: Tuple[int, int], song: SongImage) -> np.ndarray:
    """Generate pixels of an image with the provided resolution."""
    pixels = []

    pixel_idx = 0
    for _row in range(resolution[1]):
        cur_row = []
        for _col in range(resolution[0]):
            # This is where we pick our color information for the pixel
            beat, amp, _timestamp = song.get_info_at_pixel(pixel_idx)
            r = g = b = a = 0
            if beat and amp > 0:
                a = 255
            elif amp > 0:
                a = 125

            amp = abs(int(amp))

            # Randomly pick a primary color
            choice = random.choice([0, 1, 2])
            if choice == 0:
                r = amp
            elif choice == 1:
                g = amp
            else:
                b = amp

            cur_row.append((r, g, b, a))
            pixel_idx += 1

        pixels.append(cur_row)

    return np.array(pixels, dtype=np.uint8)

I used the function above to generate the image by choosing the pixel transparency on each beat and then used the amplitude for the pixel color. The result? Garbage!

Trash

Well that didn’t go well. The image looks terrible, but it does at least make sense. If I zoom in, there’s quite a bit of repetition due to the fact that we tied transparency to the BPM. It’s also not colorful because we used the amplitude without scaling it up at all, so we ended up with RGB values that are all very low. We could scale the amplitude up to make it more colorful. We could also shrink the image resolution to see if a smaller image is more interesting, then scale it up to 512×512 to use as an icon. Another nit-pick I have about this whole thing is that I still ended up using random, which kind-of defeats the purpose of generating an image from a song. Ideally a song produces mostly the same image every time.
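
For example, since the samples librosa loads are floats roughly in the -1.0 to 1.0 range, scaling an average amplitude into a usable 0-255 channel value is a one-liner (just a sketch, not what the final code ended up doing):

# avg_amplitude is a float in roughly [-1.0, 1.0]; map its magnitude to 0-255.
scaled_amp = min(255, int(abs(avg_amplitude) * 255))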

Another option: we could throw it away and try something different. I’m not going to completely throw it away, but I had an idea I’d like to try to make a more interesting image. Right now we’re iterating over the image one pixel at a time and then choosing a color and transparency value. Instead, let’s move a position around the image, flipping its direction based on the song input. This will be somewhat like how 2D levels in video games are procedurally generated with a “random walker” (like this).

Side Tracked! “Random Walk” Image Generation

Let’s make something simple to start. We can modify the red_square.py script to generate an image with red lines randomly placed by a “random walker” (a position that moves in a direction and the direction randomly changes after a number of pixels).

def walk_pixels(pixels: np.ndarray):
    """Walk the image"""
    pos = Point(0, 0)
    direction = Point(1, 0)  # Start left-to-right

    for idx in range(WIDTH * HEIGHT):
        if idx % 50 == 0:
            # Choose a random direction
            direction = random.choice([
                Point(1, 0),   # Left-to-right
                Point(0, 1),   # Top-to-bottom
                Point(-1, 0),  # Right-to-left
                Point(0, -1),  # Bottom-to-top
                Point(1, 1),   # Left-to-right diagonal
                Point(-1, -1)  # Right-to-left diagonal
            ])

        pixels[pos.x][pos.y] = NEW_COLOR

        check_pos = Point(pos.x, pos.y)
        check_pos.x += direction.x
        check_pos.y += direction.y

        # Reflect if we hit a wall
        if check_pos.x >= WIDTH or check_pos.x < 0:
            direction.x *= -1
        if check_pos.y >= HEIGHT or check_pos.y < 0:
            direction.y *= -1

        pos.x += direction.x
        pos.y += direction.y

It works! We have red lines!

Red lines (512×512)

That’s a bit noisy though. Let’s see what we get from 32×32 and 64×64 by changing WIDTH and HEIGHT in the code above.

32×32
64×64

As we zoom in what we get is a bit more interesting and “logo-like”. Listen, I know it’s not gonna be a good logo, but I’ve written this much blog, I’m not about to admit this was a dumb idea. Instead I’m going to double down! One of the images that is produced by the end of this post will be the logo for this blog forever. I will never change it. We’re in it now! Buckle up! The full prototype “random walker” script can be found in the prototypes folder for the project on GitHub.

Back to business (wrap it up!)

To finish this up I just need to decide how the different features I’m pulling out of the song will drive the pixel walking code. Here’s what I’m thinking:

  1. The BPM will determine when the walker changes direction.
  2. The amplitude will have some effect on which direction we choose.
  3. We’ll use a solid pixel color for “no beat” and a different color for “beat”.
  4. We’ll loop to the other side of the image (like Asteroids) when we hit a wall.
  5. We’ll iterate as many pixels as there are in the image.

That seems simple enough and doesn’t require any randomness. We’ve basically already written all the code for this; it just needs to be combined, tweaked, and overengineered with a plugin system and about 100 different combinations of command line arguments to customize the resulting image. I’ll spare you all the overengineering; you’re free to browse the final project source code for that. For now I’ll just go over the key parts of what I’m calling the basic algorithm (my favorite one). NOTE: The code snippets in this section only contain the bits I felt were important to show, and they do not have all the variable initializations and other code needed to run. See the finished project for the full source.

Using the SongImage class from above, we provide a path to a song (from the command line) and a desired resulting image resolution (default 512×512):

# Process the song
song = SongImage(song_path, resolution)

Next, we modified our generate_pixels method from the red_square.py prototype to create a numpy.ndarray of transparent pixels (instead of red). As our walker walks around the image, the pixels will be changed from transparent to a color based on whether the pixel falls on a beat or not.
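
The transparent canvas part boils down to something like this (a sketch; width and height stand in for the requested resolution):

# An all-zero RGBA array is a fully transparent canvas (height rows, width columns).
pixels = np.zeros((height, width, 4), dtype=np.uint8)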

Finally, we implemented a basic algorithm loosely based on the rules above. In a loop from 0 to num_pixels we check the beat to set the color:

beat, amp, timestamp = song.get_info_at_pixel(idx)
if not beat and pixel == Color.transparent():
    pixels[pos.y][pos.x] = args.off_beat_color.as_tuple()
elif pixel == Color.transparent() or pixel == args.off_beat_color:
    pixels[pos.y][pos.x] = args.beat_color.as_tuple()

Then we turn 45 degrees clockwise if the amplitude (amp) is positive and 45 degrees counterclockwise if it’s negative (or 0). I added a little extra logic where, if the amplitude is more than the average amplitude for the entire song, the walker turns 90 additional degrees clockwise (or counterclockwise).

# Directions in order in 45 degree increments
directions = (
    Point(1, 0), Point(1, 1), Point(0, 1),
    Point(-1, 1), Point(-1, 0), Point(-1, -1),
    Point(0, -1), Point(1, -1)
)

# Try to choose a direction
if amp > 0:
    turn_amnt = 1
else:
    turn_amnt = -1

direction_idx += turn_amnt

# Turn more if it's above average
if amp > song.overall_avg_amplitude:
    direction_idx += 2
elif amp < (song.overall_avg_amplitude * -1):
    direction_idx -= 2

direction_idx = direction_idx % len(directions)

# Update the current direction
direction = directions[direction_idx]

Then we update the position of the walker to the next pixel in that direction. If we hit the edge of the image, we loop back around to the other side like the game Asteroids.

# Create a temporary copy of the current position to change
check_pos = Point(pos.x, pos.y)
check_pos.x += direction.x
check_pos.y += direction.y

# Wrap if we're outside the bounds
if check_pos.x >= resolution.x:
    pos.x = 0
elif check_pos.x < 0:
    pos.x = resolution.x - 1

if check_pos.y >= resolution.y:
    pos.y = 0
elif check_pos.y < 0:
    pos.y = resolution.y - 1

If you run all of that against “Brass Monkey” by the Beastie Boys you end up with the following images (I found higher resolutions looked better):

Brass Monkey 512×512
Brass Monkey 1024×1024
Brass Monkey 1920×1080

Gallery

Here’s a gallery of some of my favorites from some other songs. I changed the colors around to experiment with what looks best. I landed on matrix colors (green/black) because I’m a nerd and I don’t know how to art. The exception is Symphony of Destruction by Megadeth, for which I used colors from the album artwork.

For the site icon, I decided to crop what looks like some sort of yelling monster with its arms waving in the air out of the 1920×1080 image generated from The Double Helix of Extinction by Between the Buried and Me.

Extra Credit – Make a live visualizer out of it

While looking at all these images I couldn’t help but wonder what part of different songs caused the walker to go a certain direction. So I decided to take a stab at writing a visualizer that would play the song while drawing the image in real time. My first attempt at it was to use the built-in Python Tk interface, tkinter, but that quickly got messy since I’m trying to set individual pixel values and I want to do it as quickly as possible. There are certainly ways I could have done this even better and more efficiently, but I decided to use Processing to get the job done. The final sketch can be found in the GitHub repository for this project under the mp3toimage_visualizer directory.

To start, I needed a way to get the image generation information over to Processing. To do that, I made a modification to the Python image generator to save off information for each pixel we set in the image. I had to save the position of the pixel, the color, and the timestamp in the song for when the pixel was changed:

pixel_changed = False
if not beat and pixel == Color.transparent():
    pixels[pos.y][pos.x] = args.off_beat_color.as_tuple()
    pixel_changed = True
elif pixel == Color.transparent() or pixel == args.off_beat_color:
    pixels[pos.y][pos.x] = args.beat_color.as_tuple()
    pixel_changed = True

if pixel_changed and pb_list is not None:
    pb_list.append(PlaybackItem(
        pos,
        Color.from_tuple(pixels[pos.y][pos.x]),
        timestamp,
        song.pixel_time))
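
This list eventually gets flushed to a CSV for the Processing sketch to read, as described below. Here's a rough sketch of that write; the column layout and PlaybackItem attribute names are my guesses, so check the repo for the real format:

import csv


def write_playback_csv(path, song_filename, resolution, pb_list):
    """Dump the playback items to a CSV for the Processing sketch.

    Assumes PlaybackItem stores the pos, color, and timestamp values
    passed to its constructor.
    """
    with open(path, "w", newline="") as fh:
        writer = csv.writer(fh)
        # Header row: the original song path and the image resolution
        writer.writerow([song_filename, resolution[0], resolution[1]])
        # One row per changed pixel: position, RGBA color, and timestamp
        for item in pb_list:
            r, g, b, a = item.color.as_tuple()
            writer.writerow([item.pos.x, item.pos.y, r, g, b, a, item.timestamp])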

One important optimization was skipping any pixel we already set. The walker does tons of backtracking over pixels that have already been set. It was so much that the code I wrote in Processing couldn’t keep up with the song. Skipping pixels already set was the easiest way to optimize.

Once I had the list of pixels and timestamps, I wrote them to a CSV file along with the original song file path and image resolution. After that, the Processing sketch was pretty simple. The most complicated parts, excluded here, were dealing with allowing user selection of the input file. The sketch reads in the CSV produced by the Python script and then draws the pixels based on the song’s playback position. The following snippet is from the draw method which is called once per frame.

if (pbItems.size() == 0) {
    return;
}

// Get the song's current playback position
float songPos = soundFile.position();

int count = 0;

// Get the first item
PlaybackItem item = pbItems.get(0);
while(pbItems.size() > 0 && item.should_pop(songPos)) {
    // Loop over each item whose timestamp is less than
    // or equal to the song's playback position (this is
    // more "close enough" than exact).
    item = pbItems.get(0);
    item.display();  // Display it
    pbItems.remove(0);  // Remove it from the list
    count++;
}
if (count >= 1000) {  // Over 1000 per frame and we fall way behind
    println("TOO MUCH DATA. Drawing is likely to fall behind.");
}

I think that about wraps it up. There’s a whole README and all that good stuff over in the GitHub repository for the project, so if you’re curious what one of your favorite songs looks like as an image, or if you want to mess around with the different algorithms and command line options that I didn’t go into here, go run it for yourself and let me know how it goes. Or don’t. #GreatJob!

]]>
https://seanlaplante.com/2021/10/27/creating-an-icon-from-a-song/feed/ 1