All the time now for IO-bound workloads. It was very awkward a few years ago, but helpers like asyncio.run(), asyncio.gather(), asyncio.to_thread(), and asyncio.create_task() have made it much less awkward.
TaskGroups should make things even less awkward, too.
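A minimal sketch of what those helpers look like together (the coroutine and the blocking function are made-up stand-ins, not anything from the thread):

```python
import asyncio
import time

def blocking_io():
    # Stand-in for blocking work you can't easily make async.
    time.sleep(0.1)
    return "done"

async def fetch(n):
    # Stand-in for an IO-bound coroutine, e.g. an HTTP request.
    await asyncio.sleep(0.01)
    return n * 2

async def main():
    # Run coroutines concurrently; push blocking work to a thread.
    results = await asyncio.gather(fetch(1), fetch(2), fetch(3))
    blocked = await asyncio.to_thread(blocking_io)
    return results, blocked

print(asyncio.run(main()))  # ([2, 4, 6], 'done')
```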
It can use fewer resources and can be faster than passing data between processes. There's really no reason to use multiple processes for IO-bound workloads; for some of them, even a thread pool can be faster than processes.
The second you aren't just passing raw bytes around, you have to consider what can and can't be sent between processes in Python: some objects can't be pickled and thus can't be passed between processes at all.
You can also run a lot more coroutines concurrently than you can processes or threads.
If your task is IO-bound, i.e., lots of network stuff, multiprocessing is overkill. Also, asyncio can handle a _lot_ more tasks: hundreds of thousands as opposed to tens. So it really shines for IO-heavy work.
Also, multiprocessing can't easily share memory, and that can be a pretty big disadvantage depending on the task.
One reason is that you often don't need to use locks. Between lines containing await (or async for or async with), you can be sure that this task won't be pre-empted to run another async task.
Another reason, if you're using the Trio async library, is that managing and cancelling multiple tasks is really easy, and you can be sure that none get lost. This update to Python brings some of that to core asyncio (but I'll stick with Trio for now, thanks).
It is, and it's a subinterpreters thing. Node has separate interpreters that run in worker threads but also share memory, which is the route Python is planning on taking.
One reason is if you need to launch a subprocess with a timeout but don't want to burn CPU in the Python script while that subprocess runs. The regular subprocess module will busy-loop in such cases, consuming CPU, while asyncio's does not.
The docs even warn about this for subprocess and suggest using asyncio to avoid it, although the docs are misleading: it only busy-loops when the timeout is not None, and only on Mac/Linux, not Windows.
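A rough sketch of the asyncio version (the command is just an example):

```python
import asyncio

async def run_with_timeout(cmd, timeout):
    # Launch the subprocess without blocking the event loop.
    proc = await asyncio.create_subprocess_exec(
        *cmd, stdout=asyncio.subprocess.PIPE
    )
    try:
        # Waiting here suspends the coroutine; no busy-looping.
        out, _ = await asyncio.wait_for(proc.communicate(), timeout)
    except asyncio.TimeoutError:
        proc.kill()
        await proc.wait()
        raise
    return out

out = asyncio.run(run_with_timeout(["echo", "hello"], timeout=5))
print(out)  # b'hello\n'
```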
Multiprocessing provides parallelism up to what the machine supports, but no additional degree of concurrency; asyncio provides a fairly high degree of concurrency, but no parallelism.
Asyncio actually plays really nicely with multiprocessing, too. The concurrent.futures.ProcessPoolExecutor can handle running tasks in child processes and handles the communication seamlessly for you. I've used it quite a bit. Can easily use all 32 cores on my server this way without the GIL getting in the way.
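Roughly what that pairing looks like (the CPU-bound function is a made-up example):

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor

def cpu_heavy(n):
    # CPU-bound work; runs in a child process, so the GIL is no obstacle.
    return sum(i * i for i in range(n))

async def main():
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as pool:
        # Each call runs in its own process; run_in_executor wraps the
        # futures so they can be awaited from the event loop.
        futures = [loop.run_in_executor(pool, cpu_heavy, n)
                   for n in (10_000, 20_000, 30_000)]
        return await asyncio.gather(*futures)

if __name__ == "__main__":
    print(asyncio.run(main()))
```

The `if __name__ == "__main__"` guard matters on platforms that spawn (rather than fork) worker processes, and the function handed to the pool has to be picklable, which is exactly the constraint mentioned above.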
Single-threaded execution of IO-bound work can be faster than breaking it up between threads or processes, and it can use far fewer resources. Then there are the preemptive vs cooperative multitasking concerns and the pros and cons of processes/threads vs lightweight threads/coroutines/etc.
Some IO-bound workloads are suited really well by the asyncio model, while other workloads might be better suited for processes and threads. They're three separate tools whose use cases might be similar, but they're not necessarily replacing one another. Multiple processes still have their place even while asyncio exists and vice versa.
Explicit (await) vs. implicit (anything that uses patched I/O deep down) switching. Implicit switching essentially makes reasoning about the code almost as hard as with preemptive threading.