It is therefore quite curious to see people get all excited about a DAW on another "platform" where at least 90% of the plugins in the world are not available, and in all likelihood are even less likely to ever become available than they are on Linux.
There's certainly a role for a tool like this in education and for people who haven't yet realized that they really need Pigments or FabFilter for their project. And yes, people do exaggerate the extent to which a specific plugin is needed. Nevertheless, the inability to run essentially any of the existing 3rd party plugins would, were it a native DAW, be viewed as completely crippling.
The WebAudio Modules "standard" offers some hope here, and I suspect that within 2-5 years, plugin toolkits like JUCE will allow you to build not just "Windows/VST3" or "Linux/LV2" or "macOS/AudioUnit" formats, but also "wasm/webaudiomodule" (or something like it). However, given how easy the various Linux options already are with JUCE, and how few plugin developers choose to use them, I have to wonder whether the massively larger size of a "browser platform market" would be enough to get them to add another platform.
That said, I think there's something interesting about building out an audio platform with "no VSTs" as a constraint - about 6 years ago I was convinced that the web was a dead end for even middling-complexity audio projects when I saw BandLab at NAMM, and I was very wrong. It seems that the value of a DAW you can fire up in a browser and instantly access all your projects/share them with your friends outweighs having no plugins and crashing after hitting browser tab memory limits. And looking down the road, it frees you from the serious problems with native plugins and current plugin APIs.
Creatively it was very freeing. Naturally, plugin envy eventually crept in and I was glad when they did add VST support, but I miss the ease of use and portability. And you got to know the stock effects inside and out, which streamlined the workflow.
I wrote and recorded a little song and published it within three hours or so, just as an experiment.
I had to reload the UI a few times after moving too quickly and because of a janky internet connection, but other than that I thought it was a well designed tool. I think it's liberating to be shielded from the many choices you make when working in a "real" DAW, and when I don't have REAPER or Studio One around, I'll happily work with a tool like this to simply stay in the habit of producing music when on the road.
But... like other commenters - there it stops, and I'm just not quite sure why.
The audience is probably me. I'm an avid Ableton user - I pay a bloody fortune for it, I upgrade it every year, I am happy to support their development because it's an insanely - insanely - good piece of software that does everything I need it to do. I'm also now completely embedded in the clip view, so going back to a linear view just isn't a possibility for me.
More to the point though - this clearly isn't aimed at people who know nothing about what they're doing. It's very non-amateur and clearly very, very powerful. But at the same time it isn't aimed at me, either - as someone who does know what they're doing, I'm thinking "um, VSTs?" or "clip view?" or "live performance / latency issues?" or whatever.
So... who is the audience? Maybe there is a middle ground of people who don't have the means to fork out for a good desktop DAW. Maybe teenagers who want to learn the principles without the spend. Maybe because it'd be very cool for collaborating? I just don't know.
Nonetheless, it's an insane demonstration of what can be done in a browser these days and for that I massively doff my cap - amazing work!
It actually demotivates me to work on music and motivates me to work on some web app ideas.
I am not sure who the audience is though either. Reaper works wonderfully on Linux.
The issue is that any DAW, or really any musical instrument, is a massive investment of time to get good at. The money isn't really the bottleneck. I can easily get a reasonably priced flute on eBay. The reason I don't play the flute is the amount of time involved in learning to play it.
There's a huge divide between people who might play with this at home as a toy and those who would be able to work with professional musicians with it.
The latter group will have some very strict requirements around performance, latency and workflow.
Edit: and reliability
DSP based systems struggled a lot with IO in the late 90s until faster SATA drives became ubiquitous. Lots of them used SCSI or exotic hardware cards to deal with large track counts.
It did have a SCSI drive, but in 1999 I did not consider that "exotic", having been using them on various Unix workstations for more than a decade before that.
They do plan to have a "native wrapper like tauri" in the future. I've played around with node-web-audio-api for low-latency multichannel audio in Electron, but it wasn't a great success, mostly because Rust audio backends (and almost all audio backends in general) aren't very good at this kind of usage.
The crux is you want everything to play in sync when doing recording and overdubbing, e.g. "hit record and what I hear live is in sync with what I have recorded already". Almost all DAWs solve this by just starting things a bit later (latency compensation). Some audio cards solve this by allowing direct hardware monitoring. But even then you will have some samples of latency.
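To make the compensation concrete, here's a minimal sketch, assuming the Web Audio API and assuming the input latency is known from the interface or a calibration step (browsers don't reliably report it): shift the recorded take back by the round trip so it lines up with what was playing.

    // Minimal latency-compensation sketch (TypeScript). baseLatency and
    // outputLatency are real AudioContext properties; inputLatencySeconds
    // is assumed to come from elsewhere, since browsers don't expose it
    // reliably.
    function compensateRecording(
        ctx: AudioContext,
        recorded: Float32Array,
        inputLatencySeconds: number,
    ): Float32Array {
        const roundTrip = ctx.baseLatency + ctx.outputLatency + inputLatencySeconds;
        const offset = Math.round(roundTrip * ctx.sampleRate);
        // The sample that belongs at timeline position t was captured
        // roundTrip seconds late, so drop the leading samples to realign.
        return recorded.subarray(Math.min(offset, recorded.length));
    }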
most plug-ins don't add latency?
> There is no way to eliminate CPU cycles being spent on whatever the plugin does.
that's not how a DAW works; they don't output audio immediately anyway. Everything is buffered at the driver level or just above, so that there's always 1 or 2 buffers of delay between the input and the output. There's no difference in terms of latency between putting one or 50 bitcrushers in series, for instance, because in the end the audio main loop looks like this (in a very simplified way):
    // called by the audio driver once per buffer period
    void process_soundcard_buffers(float** in, float** out, int samples) {
        // samples would usually be the buffer size you set in your audio config
        for (int i = 0; i < samples; i++)
            out[0][i] = f(in[0][i]); // channel 0; f is the whole effects chain
    }

where f can be one distortion, or 50 distortions - what matters is that their processing time is less than the time you have to write the output, and if so you'll always get the same fixed latency. (And if not, you don't get delayed output, you get crackles, because the soundcard will be reading its buffer whether you wrote something meaningful into it or not.)

Lastly, some fancy interfaces have built-in DSP, so that you can load your effects right in the interface, for when you want effects in your recording monitor feed...
So I don't think latency is that critical anymore, and with a decent interface it's mostly sorted out.
EDIT: The concerns here are primarily with input latency. Between plucking a string and hearing it in your monitor it has to go through: your input hardware, the USB interface, the OS, the browser (which doesn't have explicit low latency capabilities), and JS. Most platforms support ASIO, which is a low-level driver for reading audio data from devices. About as close to reading the ADCs yourself. Without a low-latency driver working with the OS there's so much latency overhead it's audible.
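For what it's worth, the closest a browser app gets today is something like the sketch below (in a JS module, since it uses top-level await). The constraint names are standard, but the latency request is best-effort and support varies by browser; real DSP would live in an AudioWorklet on the audio thread.

    // Request the browser's lowest-latency path: the interactive hint plus
    // raw (unprocessed) microphone input. `latency: 0` is only a request,
    // not a guarantee.
    const ctx = new AudioContext({ latencyHint: "interactive" });
    const stream = await navigator.mediaDevices.getUserMedia({
        audio: {
            echoCancellation: false,
            noiseSuppression: false,
            autoGainControl: false,
            latency: 0, // best-effort hint
        },
    });
    const source = ctx.createMediaStreamSource(stream);
    // Software monitoring: you hear yourself with the full round-trip latency.
    source.connect(ctx.destination);
    console.log(`base: ${ctx.baseLatency * 1000} ms, output: ${ctx.outputLatency * 1000} ms`);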
Jitter is a much bigger problem.
AFAIK ASIO is Windows-only.
macOS has proper audio APIs out of the box, and arguably since the introduction of WASAPI exclusive mode and WaveRT in Windows Vista, Windows has all the needed tools as well. But most of the more "professional" DAW products (in particular those by Steinberg, the author of ASIO) seem to ignore the existence of those. REAPER is one of the exceptions. Even WASAPI shared mode latency is really usable (below 30ms), but not low enough for tightly synchronized real-time recording.
Linux audio can be set up to provide low-latency audio as well, but I cannot comment on the details there as I'm not using it for that purpose.
Checkmate ;) j/k
You're right, it's not natively on Linux, and you wouldn't use it on Linux today, since the kernel supports lower-latency IO and has better scheduling. JACK has gotten so much better. We didn't have that at the time, and I was desperate to use the only interface card I had.
That said, there are plenty of open source implementations of ASIO drivers now that aren't hardware tied.
Actually you absolutely would use it, in the same way you did back then.
WineASIO is a layer that allows a Wine application to use the ASIO API. Since ASIO is not a part of Windows itself, anything that wants to use ASIO can't do so on "bare Wine", and Wine doesn't allow for the installation of a windows kernel driver layer like ASIO. Hence: WineASIO - an implementation of ASIO for use by Windows applications running inside Wine.
Also, Ubuntu 14 dates to 2014; JACK dates back to 2002. Very little, if anything, has changed about JACK since 2014. AFAIR, WineASIO could or did use JACK itself at some point in its development history, since it was a pretty natural fit.
I don't know of any open source ASIO implementations. The only 3rd party one I know of, ASIO4ALL, is not open source. Then again, I don't track the Windows environment much at all.
You're right, JACK existed. I remember struggling to get it working, though. Oh well, I'm quite a long way from that career now. Rusty skills.
You cannot be performing to audio that you are hearing with any delay, especially if the monitoring of the live audio is also being routed through software.
Past a certain point, latency introduces delays and badly affects how you perform. In some circumstances it makes performance actually impossible.
There are ways around this, namely if the software knows exactly what the input and output latency is then the playback and recording can be compensated. For live monitoring though you really need that done in the audio hardware itself in hard real time.
The reasons are things like, if you want to play in time with a previously recorded track, or if you are using digital effects and need to be able to hear their effect on your instrument as you play it.
Both of these are less of an issue today than they were 15 years ago, when a USB 2.0 audio interface added significant delay and made it harder to get what you wanted out of the system.
Otherwise latency is not really a problem.
This is absolutely not true. Latency in audio systems is important in almost every aspect of using a DAW.
But that doesn't allow me to have my (software) effects chain when I record? For a guitar solo where I'm going to play with, say, delays and whammys, that's a no-go.
That’s why I really like the idea of building a DAW in the browser: it has huge potential for all kinds of users, especially in education, whether for kids, older people, or just anyone who wants to make music on the go, no matter what device they’re using.
I see a lot of promise in this project and fully support André, who has already contributed to developing great audio tools.
edit: I like the idea of the "Discoverable Toys" and can see how this could develop into something new. But why not just concentrate on that and bring it to other DAWs in form of a plugin, instead of writing a whole new DAW in the browser?
> Will the DAW be open source from the start or only become open source later?
> To make the most of being open-source, we believe that there should be an appropriate infrastructure for documentation and quality assessment of code contributions. Our current focus is to lay the foundation for an MVP and release a public standalone version 1 by the end of the year.
So it seems that they intend to open source it later. Still a bit of a strange move, but fair enough I guess.
Ease of use. Log in without installing anything, work on a project on the tablet on the train, continue the project later without syncing it somehow, all while working in collaboration with a friend.
Also the open source audio editor situation is quite dismal. Audacity is really the only game in town. It's showing its age too and its trajectory doesn't look great. A more editing-focused DAW, which OpenDAW seems to be, would be very welcome.
This is featherweight in comparison, and a lot closer to a traditional DAW than audiotool's skeuomorphic virtual analogue approach.
It is an audio/video/MIDI plugin standard for the web, and it is rather mature.
During covid I worked on a collaborative browser-based DAW, https://sequencer.party. I definitely bit off more than I could chew, but you can wire up plugin chains at least.
I would strongly suggest you consider adding webaudiomodule support and instantly get ~50 plugins supported in the DAW. I also packaged up a bunch of them ready for consumption here: https://github.com/boourns/wam-community
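For anyone curious what hosting one involves, here's a rough sketch based on the patterns in the public WAM2 examples; the exact imports and signatures may differ between SDK versions.

    // Rough WAM2 hosting sketch; the @webaudiomodules/sdk names are taken
    // from the public examples and may vary by version.
    import { initializeWamHost } from "@webaudiomodules/sdk";

    async function loadWamPlugin(ctx: AudioContext, pluginUrl: string) {
        // A host registers once per AudioContext and gets a group id that
        // plugin instances are created under.
        const [hostGroupId] = await initializeWamHost(ctx);
        // Each WAM plugin ships as an ES module whose default export is a
        // WebAudioModule subclass.
        const { default: WamCtor } = await import(pluginUrl);
        const instance = await WamCtor.createInstance(hostGroupId, ctx);
        // instance.audioNode is a plain AudioNode, so it slots straight into
        // an existing graph; instance.createGui() returns a DOM element.
        return instance;
    }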
"Extended methods in iterators" sounds like a developer-experience quality of life feature that could be easily avoided.
Still, I'm happy to see that this seems to work in Firefox, so it's not Chrome-only.
But it makes me question why "the browser" is apparently still the inevitable platform of the future.
In order for a PWA to be normal and usable, it must be available offline, open in a window without browser chrome, have similar performance to a native application, be launchable via a shortcut on the host OS, and respond to the mouse and keyboard shortcuts the way you'd expect. I think I've just described... an Electron app?
It's cool that this kind of thing can run in a web browser. With no install hurdle, it's much easier to convince people to try it out, and it's cross platform. Beyond that I can't really think of any advantages to having it run in the browser.
If what's lacking is an easy way to try software, I can't help but imagine lots of ways this could be addressed that would be much more pleasant to use than loading PWAs. Right now I can't seriously see myself enjoying using a PWA for work.
I say this having recently finished several large design projects in Figma, which is apparently a gold standard success story for browser-based apps. Despite the years of development and herculean engineering efforts, I can still feel the browser jank. I begrudgingly open the thing in Chrome, as it completely chokes in Firefox. It still chokes on moderately sized canvases, moving things is slow and laggy compared to native apps, keyboard shortcuts sometimes don't work or keys get stuck in a weird pressed or unpressed state, loading is slow, elements pop in over tens of seconds.
I know I'm an old man yelling at clouds at this point, I'm just disappointed that we seem to be going backwards in performance and usability of software.
You've just described a PWA. You can install them as a host OS shortcut, they run without browser chrome, and they should have performance equal to Electron's.
Also if you really want extra bloat and faff, any PWA is trivial to turn into an Electron app.
Most of PWA criticism is based on misunderstanding PWA capabilities.
I admit I didn't realize creating a shortcut to a PWA was already supported as it's not pushed very hard. In Chromium it's buried under dots, "Cast, Save and Share" (which is a bizarre mashup of disparate functions), and finally "Install page as app".
The window that loads still has browser chrome, in the form of a back, refresh, and three dots button. As soon as you navigate somewhere, even within the same app, the url bar appears again, but you can't edit it. It seems that to be able to always hide this bar, you'd need a way to differentiate between "internal" links that should navigate within the page, and external links that should open in a browser.
I tried turning off my internet, and neither Figma nor openDAW showed anything more than a blank page, which confirms my feeling of uncertainty around PWAs, namely: how do you know what will actually work offline? It feels fragile, like if I reset my browser or clear my cache, my installed applications will disappear. I'm not sure I'm comfortable with the blurring of the lines between bookmarks and installed applications.
All this is of course addressable with a lot more infrastructure and work from browser and OS makers. To me it seems like a lot of development to end up with something that behaves a lot like Electron, with the added Easter egg of being able to access applications in a browser, without intending that anybody actually do so.
PWAs can run fully offline using service workers.
https://developer.mozilla.org/en-US/docs/Web/Progressive_web...
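For reference, the service-worker half is small. A minimal cache-first sketch, assuming TypeScript's "WebWorker" lib; the file names are placeholders:

    // sw.ts - the service-worker side of an offline PWA.
    const sw = self as unknown as ServiceWorkerGlobalScope;
    const CACHE = "app-shell-v1";
    const ASSETS = ["/", "/index.html", "/app.js", "/app.css"]; // placeholders

    sw.addEventListener("install", (event) => {
        // Pre-cache the app shell so the first offline launch works.
        event.waitUntil(caches.open(CACHE).then((c) => c.addAll(ASSETS)));
    });

    sw.addEventListener("fetch", (event) => {
        // Cache-first: the app keeps working with the network off.
        event.respondWith(
            caches.match(event.request).then((hit) => hit ?? fetch(event.request)),
        );
    });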
> In Chromium it's buried under dots, "Cast, Save and Share" (which is a bizarre mashup of disparate functions), and finally "Install page as app".
Chromium supports prompting the user to install the app. There's also an icon in the address bar if the page has an app manifest.
https://khmyznikov.com/pwa-install/
https://developer.mozilla.org/en-US/docs/Web/Progressive_web...
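Concretely, the prompting flow looks something like this; beforeinstallprompt is non-standard and Chromium-only, hence the untyped event.

    // Chromium-only install prompt flow.
    let deferredPrompt: any = null;

    window.addEventListener("beforeinstallprompt", (e) => {
        e.preventDefault();   // suppress the mini-infobar
        deferredPrompt = e;   // stash it for our own install button
    });

    async function onInstallClicked() {
        if (!deferredPrompt) return;
        deferredPrompt.prompt();
        const { outcome } = await deferredPrompt.userChoice; // "accepted" | "dismissed"
        deferredPrompt = null;
    }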
> The window that loads still has browser chrome, in the form of a back, refresh, and three dots button.
The window appearance and behavior can be changed using the app manifest, although getting rid of the three dots may be impossible on some platforms.
https://developer.mozilla.org/en-US/docs/Web/Progressive_web...
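For illustration, the relevant manifest fields look something like this (all values hypothetical); "display": "standalone" is what drops the browser chrome, and "display_override" can go further on Chromium desktop.

    {
        "name": "openDAW",
        "short_name": "openDAW",
        "start_url": "/",
        "scope": "/",
        "display": "standalone",
        "display_override": ["window-controls-overlay"],
        "theme_color": "#111111",
        "background_color": "#111111"
    }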
> I tried turning off my internet, and neither figma nor openDAW showed anything more than a blank page, which confirms my feeling of uncertainty around PWAs, namely, how do you know what will actually work offline.
Neither Figma nor OpenDAW seems to be configured as an offline PWA.
> All this is of course addressable with a lot more infrastructure and work from browser and OS makers.
The problems you encountered are mostly solved in PWA APIs already, at least for Chromium based browsers. There is some variation in some features between browsers and OSs (Safari and iOS are particularly bad).
> To me it seems like a lot of development to end up with something that behaves a lot like Electron, with the added easteregg of being able to access applications in a browser, without intending that anybody actually do so.
PWAs are e.g. easier to install, have smaller footprint, are more portable and are a lot more secure. Not sure what you mean by "accessing applications in a browser". PWAs can't access anything a normal website can't.
Because it doesn't require trust. The browser actually got the permission model (almost - but there are extensions) right. I can safely open this and not worry about security.
At Open Music Networks (OMN), we’re taking a different approach. We’re building a simple, more accessible DAW that lives in your browser and integrates with our new music co-creation platform. Connectivity with more powerful client-based DAWs is on the roadmap.
If you’d like to discuss or collaborate, feel free to email me at david@openmusic.io.
How could we get in touch with your team?
Does it support plugins?
> If you want to help programming please be patient and wait for openDAW to be fully open-source. We will communicate this step loud and clear. Until then, we appreciate any feedback, testing and suggestions. Please log into our Discord server.
> Yes, the offline native app will have VST support at some point in future.
Source: https://opendaw.org (FAQ)
That kind of logic gives me the impression they may not ever open-source it if they get successful enough to sell it somehow (and, if it flops, it's a coin-flip as to whether or not they get around to open-sourcing it before it disappears).
I say this as someone who makes music and records it on a PC (macOS/Windows/Linux), AND as someone who makes software for those same OSes. Admittedly, I do not really mess with loops or synthesizers, so I acknowledge those use-cases as some that might actually seem reasonable with current DAWs, but I definitely do not "get" it. I get bored screwing with synthesizers/filters (funny noise machines), and I use loops mostly with simple sequencers. So most of my time is spent producing and managing waveforms. To that end, every DAW looks - to me - like a god damned file manager, rather than a space for making content.
I'd LOVE for one piece of software to treat me like a user, rather than an audio engineer. I need a timeline, sure, but FIRST I need to pick an instrument; either by plugging it in (and the software auto-recognizing it), or by selecting a synth. I also need to pick a controller, if it's a synth. THEN I need to be put into an area where I can immediately get feedback for that thing. I don't need it to ONLY play when I hit record, or when I'm logging to the timeline. I need to have an empty space where I can start doing "takes". Simple snippets that I can refer back to. Auto-split during "silence", so I don't have to scan through a massive timeline to find the bit I liked. Obviously the mixers and things need to be summonable, here, for tuning. But they don't necessarily need to already be present. I don't need 18 knobs for tuning while I'm scritching out a riff, or finding the melodic line with my voice. I need to be able to try a thing, edit the settings, try again, edit again, back and forth until I feel like I'm "here in the space".
Again, this is like...every recording studio I've ever been to. You take some time to get your gear set up and, while that's happening, you play the things and find your sound in this space. Yet every piece of audio software just pretends that all of its audio processing isn't a change to the "space". It treats audio input as a kind of "pure" input which it will alter, but doesn't immediately let musicians get a feeling for that alteration. Instead, we get infinite complexity right up front because "that's how computers work" or "that's how the files are handled" or "it's based on older stuff that had such limited processing this was the only way it could be done; now people are used to it, so we can't change it".
All nonsense. I'm not asking for every DAW to be geared towards musicians, I'm asking for ONE. Let ProTools still be ProTools. Or Audacity still be Audacity. But I'd really love if someone could make software for a 6 year old to plug a guitar into and start playing.
*yes, I am in a position to make that kind of DAW, and yes I do have the requisite insight to build the thing I'm asking for. And, boy, if I ever get the time, it's on. But I won't be holding my breath for my other projects to clear out enough to make this happen.
Ableton Live at least is a phenomenally powerful tool for arranging and recording music in a way that is completely elastic and transformative. I’ve been using it for over twenty years to make records and every time I’ve wanted it to do more, somehow those features have been magically added on every version without me having to ask.
The thing with any tool is that one has to know what result one wants rather than focussing on the method.
Your post reads like you should maybe think about doing creative things other than music. And I’m sure we’d be all ears for the improved DAW that you never got round to writing yet.
Apple also made that tool for six-year-olds and their guitars. It’s called GarageBand. Some of those six-year-olds have made albums with it.
My post reads like a person who has tried every DAW I know of and ended up with at least 10 steps (usually involving "sign up" and "log in" for no fucking reason), when there should only be 3: Plug in, open app, pick guitar.
I've mastered enough CONTENT to know that Ableton is great! Not just at MUSIC but at sound effects and atmospheric/environmental engineering, too. But Ableton is still an engineer's software, not a musician's.
Again, I'm not complaining that the good things work good at what they're good at. I'm complaining that no one seems to be working on different good stuff for different kinds of people.
The difficulty is essential: compared to plugging the guitar into an analog amplifier, you have a minimum of three devices (ADC with guitar-compatible input, computer including DAC, speakers), the user interface is virtualized, you need to find and install the right software (including device drivers).
So your kid with an electric guitar should either use analog amplifiers and effects, or borrow a good setup from some "engineer", or take advantage of their youthful stubbornness and enthusiasm to learn enough about the technical aspects you dislike.
Whether the people that made Ableton are musicians or not, or whether you're a musician, or not, is all irrelevant. They made software that is a thin layer over the literal functions of the machine because that's how all DAWs are made. "Don't fix what ain't broke." It's a great way to run a business. You learned their way of working, either from the software itself, or from other DAWs, and you like it. Great! Literally every musician for the past 2 decades has come to work with these DAWs, so it's not like working in a format for audio engineers is incompatible with being a musician. I'm quite certain it helps, actually.
But, again, none of that speaks to the complexity of using the software as someone who doesn't know anything about DAWs or modern audio software. I know a guy who can play the guitar beautifully; yet he can't figure out Ableton. I've sat him down multiple times and showed him some really cool stuff that he really liked, but he has never once not called me when he tried to use the software. Some people just aren't built like that. He is thinking about music in one form, and the computer wants him to formalize it in some different form. And he just can't or won't make the bridge.
And you may say: "well, fuck him; he should learn Ableton or be doomed to live without its benefits". More power to you. But I just wish there were something ELSE that would speak to him in his language.
In the days before DAWs, the role of musician and engineer were both critical for recorded music, but also mostly distinct. DAWs have changed that fundamentally because there's very little capital outlay required for the engineer's tools these days.
We've seen this over the years in Ardour. A musician wants a big red button and an application that will find their device signal all by itself. Two weeks later they want to use an EQ. Two months later they want to be able to arrange whole sections. Two years later, and they don't like the pan power law and have some criticisms of the automation interpolation.
There's a bunch of stuff like that happening in that industry as the equipment evolves. I've been doing software/networking/hardware for coming up on 30 years now. My wife's been doing mostly live audio/production with some recording for about the same amount of time. Starting about 10 years ago she had to start learning about networking. It started simple with things like setting up routers/WiFi access points for controlling consoles with iPads. Now with AES-67 she's had to learn a whole bunch about RTP, PTP, QoS, VLANs, DHCP, subnetting, etc. It's working out well for her, she's incredibly sharp and many of her peers come to her for advice/consultation when things aren't working right, but it was definitely not something she expected to need to learn. When everyone's stumped they give me a call... I don't know much at all about the audio side of it but a little bit of Wireshark can usually explain what's broken in their systems.
Some have developed much further though to support a more digital-first approach.
But it's true that the barrier to entry can still be very high. Trying to explain any of these packages to a musician who is not also a computer power user is extremely challenging, believe me I've tried.
If we could arrive at a point where a DAW can be intuitive to a musician and not technically overwhelming that would be very interesting.
What would be more interesting though would be if that same project could be viewed in an "engineer mode" which exposes the technical view for someone else to work on at a different level.
As far as the "engineer mode" that's what I think galls me most: You can't really write audio software without all of the technical stuff so you're going to NEED that stuff anyway. AND, as someone matures in their musical ability, they often need to do more specific fine-tuning which would require those features. And that means that you could basically funnel non-audio-engineers into understanding at least the parts they need to make their own music when the time came. There's no better way to learn than to solve a "problem", even if that "problem" is just "how do I tighten up the high end on this so it makes this cool sound I want?"
In short: making a DAW for musicians is not only accessible to non-audio-engineers, it's also a gateway drug to semi-audio-engineers and their explorations. I'm just all for that!
If the software was primarily driven by a command list back-end, had a bunch of semi-preset solutions to common problems, and also could be "spoken to" - would that feel more comfortable for our musician user?
Still though, all of that is a layer AFTER that initial barrier to entry.
Even this is still a problem, because it's unlikely they know even what question to ask. Or if a sensible question is asked it may be an XY problem, where what is really intended is not what is asked.
Having thought about this for the last few minutes, it does seem inevitable that the software would have to start coaching the musician in the ways of the engineering and of "music software" people, so that the inputs become more accurate and aligned with the outcomes the software is capable of providing.
I think everyone would crave becoming more productive in the environment over time and not have to suffer the initial baby steps forever.
It's very difficult to imagine a DAW environment which exposes deeper functionality that is not already like a lot of the existing packages.
Edit: and one final thought - it's a hard environment to build by the nature of the work being done being a creative process with no correct answers and which needs to support a multitude of different approaches to creativity. It's pretty opposed to software being generally a machine with a fixed number of functions
It's called MainStage, and it's an industry-standard tool (although recording engineers probably never use it).
The DAW has to cover so many different use cases and styles that I don't see how you can get around the complexity upfront.
A 6 year old could plug in a guitar and record the audio with any editing software. Once you want multiple tracks though and real time effects on those tracks and midi then you are already at the flavor of most DAWs.