Your Audio Setup - Desktop Customization & Workflow
My audio system is built on jack and pulseaudio with ALSA, but it extends beyond a single computer. It's a near-complete integration of everything I own, by which I mean that any device can use audio from anywhere. That part is built on Audinate's proprietary Dante network audio devices. The main Linux computer runs a Digigram LX-Dante PCIe card (it supports 128 channels in/out, but I'm using a fraction of those), which is the main system device for jack. For example, I usually have a stereo output that goes to HDMI but also to the Dante network. I have alsa_out running like this:
Code:
/usr/bin/alsa_out -c 2 -j tv_speakers -d hdmi:CARD=NVidia,DEV=0

That handles the TV speakers. But I'm not running that command by hand every time I start my system; it's of course managed with systemd user units. This is where it starts to get unusual.

Code:
systemctl --user list-dependencies tv_speakers.target

Yeah… so in order to run my TV speakers, I use a systemd target called tv_speakers.target, which starts alsa_out@nvidia.service to bind to the HDMI ALSA device. The target also manages the connections to its jack client (called tv_speakers). Two helper scripts, written in Python, do the work: one enforces connections/disconnections between two jack clients and their ports, and one checks that a client exists. There are absolutely no sleep or jack_connect shell commands in use here, because I'm using jackdbus and watching for clients/ports, reacting as soon as something changes in the graph state.

Then I went and added systemd template services for those clients and connections, so I can declaratively script the state of my jack clients and connections. This opens up some interesting possibilities: I can now compose systemd services and targets that rely on jack connections or clients just as easily as I manage any other dependency in user-level systemd units.

The list-dependencies output from earlier should start to make sense now. tv_speakers.target starts the ALSA/jack device, but it also pulls in an ecasound effects processor that uses LADSPA. The speaker_effects one adds extra dynamic range compression beyond what jack_effects.target, which is nearly identical in purpose, already provides. Each step of the signal chain is set up with systemd so that it starts only what it needs, in the correct order. The journal logs from the jack Python scripts use custom fields to add context about the jack client, so later I can filter on things like a jack client name or port.
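The core of that "enforce connections" helper can be sketched as a pure reconciliation step: diff the connections a unit declares against the connections currently on the jack graph, then issue only the connect/disconnect calls needed. This is my illustration, not the author's actual script; the real helpers react to jackdbus graph-change signals rather than being called with snapshots, and all the port names below are made up.

```python
# Hypothetical sketch of the "enforce connections" logic: given declared
# (desired) port connections and the current jack graph state, compute
# which connections to make and which stray ones to break.

def reconcile(desired, current):
    """Return (to_connect, to_disconnect) as sets of (src_port, dst_port)."""
    desired = set(desired)
    current = set(current)
    return desired - current, current - desired


# Example: tv_speakers should receive the main stereo mix and nothing else.
desired = {
    ("main_mix:out_L", "tv_speakers:playback_1"),
    ("main_mix:out_R", "tv_speakers:playback_2"),
}
current = {
    ("main_mix:out_L", "tv_speakers:playback_1"),  # already correct
    ("mednafen:out_1", "tv_speakers:playback_1"),  # stray auto-connect
}

to_connect, to_disconnect = reconcile(desired, current)
```

Running this reconcile on every graph-change notification (instead of sleeping and retrying jack_connect) is what makes the setup race-free.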
The next thing I'm going to do is script transient clients the same way, possibly using the .wants folder. Then I could start something like mednafen with a particular ROM file and have its jack connections made automatically (rather than what it does now: blindly auto-connecting to system ports 1 and 2). Most things are in systemd user units for me by now. I can start discord.target and it pulls in jack_client@discord.service, which pulls in the connections needed to at least have output. I use the systemd targets to declare intent—what I want the system to set itself up to do. I either start targets/services directly or click buttons in Home Assistant on my phone. A week ago I played with letting an AI agent start/stop units from a prompt. The one I usually use is audio-full.target:

Code:
Requires = dante_pcie_in_notify.service

I'm in the middle of refactoring away from some node.js scripts that used to watch D-Bus for jack clients and ports and manage all the connections; a few of these are still in use. And although it has run reliably for years, it's still a work in progress.

There's also a multi-channel recording service that runs a fixed instance of ecasound. That's kind of lazy, but because of it I have years' worth of conversations recorded with my audio isolated from everyone else's. I anticipated things like whisper (AI audio-to-text transcription) years ago and have the raw data to mine those recordings eventually.

I'm not using a USB headset. It's a full rack of studio-grade equipment: dedicated AD/DA into the Dante network, a managed switch, compressors, a hardware mixer, a wireless microphone receiver, and a wireless transmitter for IEMs. Each device is either always on or conditionally enabled using SNMP commands to a networked PDU. The system powers on only what it needs to complete the signal chain for whatever I'm doing, and it's systemd targets and services all the way down.
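For flavor, a declarative connection as a systemd template instance might look roughly like this. This is my guess at the shape of such a unit, not the author's actual file; the unit names, script path, and instance encoding are all invented for illustration.

```ini
# jack_connection@.service -- hypothetical template that enforces one
# declared jack connection; the instance name (%i) encodes the endpoints.
[Unit]
Description=Enforce jack connection %i
# Only meaningful once the jack client exists on the graph
Requires=jack_client_check@%i.service
After=jack_client_check@%i.service

[Service]
Type=notify
ExecStart=/usr/local/bin/jack-enforce-connection %i
Restart=on-failure

[Install]
WantedBy=audio-full.target
```

A target like discord.target can then simply Wants= the client and connection instances it needs, and systemd handles ordering and teardown.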
If I only want the microphone signal chain set up, I can just start microphone.target. The devices power on and get connected in a specific order, using After=/Before= ordering in the units to avoid any pops or static. Anything that can start simultaneously does; everything else waits for its dependencies. The scripts check that each device is actually reachable on the network (or use known delays for fully analog devices) before considering it started. I use sd_notify in my scripts so I know what a service is doing right now, not just whether it started successfully.

Code:
systemctl --user list-dependencies microphone.target --plain

Everything is on Dante, which is of course unusual for a home user. All of my other computers have an Audinate AVIO USB adapter for two channels in/out. Software-only solutions for network audio exist (I used NetJack in the past), but I wanted pro-level reliability, so I went with these. Any computer can use any microphone, and all output from every source is mixed into my IEMs, which I hear wirelessly on a Shure P10R+ bodypack. I'm still mad I didn't go with Lectrosonics for this, because then my IEMs would be encrypted. I can also switch between two separate final signal chains, though I rarely make use of it. For example, two people playing separate games in the same room can hear each other through the microphones while hearing different game audio (with no speakers).

The only problem with Dante (besides the cost, obviously) is the licensed API and the macOS/Windows-only GUI client for managing the devices and connections. I did just enough reverse engineering of their control protocol (wireshark while doing stuff in the GUI) to make a CLI tool covering the basics: mDNS service discovery, basic device configuration, and routing.
Code:
netaudio device list

Code:
netaudio subscription list | grep -iEv '(own signal|unresolved|no subscription)'

Code:
jack_lsp -c

All the information and capabilities are there already, but I haven't turned it into a proper daemon that can react to changes in routing or device configuration. My Dante routing is pretty static right now because of that, which is fine. Eventually, though, I want this under systemd too, so I can know for sure that my entire signal chain (hardware, jack, Dante) is actually correct. I've been at this step for a while and just haven't finished it.

The microphone is a Shure SM7B on a Yellowtec M!ka mount; I have spares for both. I also have a wireless Shure lav mic (a Shure AD4D receiver with an AD1 bodypack transmitter). There's a Bluetooth AVIO adapter to get audio in and out of my phone and Steam Deck, but I hate using Bluetooth for audio, obviously.

The full signal chain is SM7B -> Rupert Neve Shelford Channel (channel strip) -> Ferrofish A32 Dante (AD/DA) -> (stuff in/out of Dante from everywhere, plus the AD4D wireless mic receiver) -> Ferrofish DA -> Neve Satellite 5059 (analog mixing down to two stereo pairs) -> Neve Portico 5043 (compressor; a single half-rack unit does stereo, and I have two of these) -> Shure P10T, which transmits the final mix wirelessly to my Shure P10R+ for my Shure SE846 IEMs. The net effect is that every device can use both microphones, and I can hear everything and still talk while moving around my house wirelessly, at low latency.

For HDMI I've got an HDFury VRROOM to extract audio from HDMI 2.1 sources (game consoles, MacBooks, the gaming PC, whatever). The device has IP control, which I've scripted so I can switch video inputs from systemd or Home Assistant. Its TOSLINK output goes into a splitter: one leg feeds the Ferrofish A32 Dante (as a backup; I don't want that device to be a hard requirement for this), the other goes through a Hosa TOSLINK-to-AES3 adapter into an Audinate AVIO AES3 adapter.
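A first step toward that "is my whole chain actually correct" check could be a pure diff between a declared Dante routing table and the subscriptions actually reported. The line format parsed below is purely illustrative (the real netaudio output may differ), and the device/channel names are made up:

```python
# Hypothetical sketch: verify that actual Dante subscriptions match a
# declared routing table. The "rx <- tx" line format is an assumption
# for illustration, not the real `netaudio subscription list` output.

def parse_subscriptions(text):
    """Parse lines like "rx-device:ch <- tx-device:ch" into a dict."""
    routes = {}
    for line in text.splitlines():
        if "<-" not in line:
            continue
        rx, tx = (part.strip() for part in line.split("<-", 1))
        routes[rx] = tx
    return routes


def check_routing(declared, actual):
    """Return human-readable mismatches; an empty list means all correct."""
    problems = []
    for rx, tx in declared.items():
        got = actual.get(rx)
        if got != tx:
            problems.append(f"{rx}: want {tx}, have {got}")
    return problems


declared = {"ferrofish:1": "lx-dante:1", "ferrofish:2": "lx-dante:2"}
actual = parse_subscriptions(
    "ferrofish:1 <- lx-dante:1\n"
    "ferrofish:2 <- avio-aes3:1\n"
)
issues = check_routing(declared, actual)
```

Run periodically (or from a daemon reacting to device changes), a non-empty result could fail a systemd unit and make the broken chain segment visible immediately.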
The AVIO adapter is always on, so I don't need to start the Ferrofish unless the other device fails (which it has; I'm on my second one). So yeah, roughly $1000 to get HDMI 2.1 audio into Dante. There's more, but that's enough of an overview.

More recently, I put a lot of effort into core and IRQ isolation to get the jack xrun count down. It only really became an issue once I had lots of network traffic alongside a 10+ day BTRFS balance, which made it absolutely necessary to finally do. I even found a bug in pulseaudio that leads to an infinite loop: I had run setcap on the binary so it could renice itself, but because of the capabilities it kept trying to drop privileges, thinking it was root. I last wrote about this here: https://www.avsforum.com/threads/im-usin...e.3287663/ which has some photos from before I moved and a bunch of hyperlinks to the gear involved.
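On the IRQ half of that isolation work: on Linux you steer an interrupt by writing a hexadecimal CPU bitmask to /proc/irq/&lt;n&gt;/smp_affinity (root required). A small sketch of building and applying such masks; the IRQ number and core choices below are examples, not the author's actual layout:

```python
# Sketch of IRQ pinning for core isolation: build the hex CPU bitmask
# that /proc/irq/<n>/smp_affinity expects from a list of CPU indices.

def cpu_mask(cpus):
    """CPUs [2, 3] -> "c" (bits 2 and 3 set)."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return format(mask, "x")


def pin_irq(irq, cpus, proc="/proc/irq"):
    """Write the affinity mask for one IRQ (needs root on a real system)."""
    with open(f"{proc}/{irq}/smp_affinity", "w") as f:
        f.write(cpu_mask(cpus))


# Example: keep a noisy NIC's IRQ on housekeeping cores 0-1, away from
# the cores reserved for jack (IRQ number is hypothetical):
# pin_irq(31, [0, 1])
```

Combined with isolating the audio cores from the general scheduler, this keeps network and disk interrupt load from stealing time from the jack process.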
Messages In This Thread
Your Audio Setup - by venam - 08-02-2021, 03:37 AM
RE: Your Audio Setup - by movq - 09-02-2021, 01:06 PM
RE: Your Audio Setup - by venam - 09-02-2021, 01:30 PM
RE: Your Audio Setup - by movq - 10-02-2021, 11:33 AM
RE: Your Audio Setup - by venam - 10-02-2021, 11:58 AM
RE: Your Audio Setup - by venam - 05-07-2021, 03:29 PM
RE: Your Audio Setup - by VMS - 28-02-2022, 12:21 PM
RE: Your Audio Setup - by s00pcan - 30-04-2025, 03:43 PM