The CEO would trust a personal assistant to do this only if they have deep trust in the assistant's competence. They would also need to know the assistant understands their preferences well enough not to do something they would dislike. AI can mirror that.
More importantly, though, there are consequences if the human assistant makes a big mistake and books the wrong flight: they have to take responsibility for it.
An LLM, when it makes a mistake, will only ever write that it is sorry. That will never be good enough for anything of consequence. To avoid mistakes entirely, the LLM would practically have to be omniscient, which is not possible in a world filled with uncertainty.
So much of human activity is built around a network of trust in which another human takes the blame if something goes wrong. So much of it involves coin flips, and someone has to take the blame when the coin lands on heads after we bet on tails.