1. Inside the `accept_loop`, we create the broker's channel and `task`.
2. Inside `connection_loop`, we need to wrap `TcpStream` into an `Arc`, to be able to share it with the `connection_writer_loop`.
3. On login, we notify the broker. Note that we `.unwrap` on `send`: the broker should outlive all the clients, and if that's not the case, the broker has probably panicked, so we can escalate the panic as well.
4. Similarly, we forward parsed messages to the broker, assuming that it is alive (a sketch of these steps follows the list).
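To make these steps concrete, here's a minimal sketch of the reader side wired to the broker's channel. This is not the chapter's final code: the `Event` enum, its fields, and the `"dest1,dest2: text"` line format are illustrative assumptions.

```rust,edition2018
# extern crate async_std;
# extern crate futures;
use std::sync::Arc;

use async_std::{io::BufReader, net::TcpStream, prelude::*};
use futures::channel::mpsc;
use futures::SinkExt;

type Sender<T> = mpsc::UnboundedSender<T>;

// Hypothetical broker event type, for illustration only.
enum Event {
    NewPeer { name: String, stream: Arc<TcpStream> },
    Message { from: String, to: Vec<String>, msg: String },
}

async fn connection_loop(mut broker: Sender<Event>, stream: TcpStream) -> std::io::Result<()> {
    let stream = Arc::new(stream); // 2: wrap in `Arc` to share with `connection_writer_loop`
    let reader = BufReader::new(&*stream);
    let mut lines = reader.lines();

    // The first line a client sends is its login name.
    let name = match lines.next().await {
        Some(line) => line?,
        None => return Ok(()),
    };
    // 3: notify the broker; `.unwrap` escalates a broker panic,
    // since the broker should outlive every client.
    broker
        .send(Event::NewPeer { name: name.clone(), stream: Arc::clone(&stream) })
        .await
        .unwrap();

    // 4: forward each parsed "dest1,dest2: text" line to the broker.
    while let Some(line) = lines.next().await {
        let line = line?;
        if let Some(idx) = line.find(':') {
            let to = line[..idx].split(',').map(|dst| dst.trim().to_string()).collect();
            let msg = line[idx + 1..].trim().to_string();
            broker
                .send(Event::Message { from: name.clone(), to, msg })
                .await
                .unwrap();
        }
    }
    Ok(())
}
```

The `Arc` is what lets two independent tasks, the reader and the writer, refer to the same `TcpStream`.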
So how do we make sure that messages read in `connection_loop` flow into the relevant `connection_writer_loop`?
We should somehow maintain a `peers: HashMap<String, Sender<String>>` map which allows a client to find destination channels.
However, this map would be a bit of shared mutable state, so we'd have to wrap an `RwLock` over it and answer tough questions about what should happen if a client joins at the same moment as it receives a message.
One trick to make reasoning about state simpler comes from the actor model: we can instead create a dedicated broker task which owns the `peers` map and communicates with the other tasks using channels. The order of events "Bob sends message to Alice" and "Alice joins" is then determined by the order of the corresponding events in the broker's channel.
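Here's a minimal sketch of such a broker task, reusing the hypothetical `Event` type from the sketch above; spawning the per-peer writer tasks is elided:

```rust,edition2018
# extern crate async_std;
# extern crate futures;
use std::collections::HashMap;
use std::sync::Arc;

use async_std::net::TcpStream;
use futures::channel::mpsc;
use futures::{SinkExt, StreamExt};

type Sender<T> = mpsc::UnboundedSender<T>;
type Receiver<T> = mpsc::UnboundedReceiver<T>;

// The same hypothetical `Event` as in the earlier sketch.
enum Event {
    NewPeer { name: String, stream: Arc<TcpStream> },
    Message { from: String, to: Vec<String>, msg: String },
}

async fn broker_loop(mut events: Receiver<Event>) {
    // The broker owns `peers` outright, so no `RwLock` is needed: every
    // join and every message is serialized through the `events` channel.
    let mut peers: HashMap<String, Sender<String>> = HashMap::new();

    while let Some(event) = events.next().await {
        match event {
            Event::NewPeer { name, stream: _stream } => {
                let (sender, _receiver) = mpsc::unbounded();
                // The real code would spawn a writer task here that
                // drains `_receiver` into `_stream`.
                peers.insert(name, sender);
            }
            Event::Message { from, to, msg } => {
                for addr in to {
                    if let Some(peer) = peers.get_mut(&addr) {
                        let line = format!("from {}: {}\n", from, msg);
                        // A failed send just means this peer already hung up.
                        let _ = peer.send(line).await;
                    }
                }
            }
        }
    }
}
```

Because the broker is the only task that ever touches `peers`, the races the `RwLock` approach would have to answer for disappear by construction.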
First, let's add a shutdown channel to the `connection_loop`:

1. To enforce that no messages are sent along the shutdown channel, we use an uninhabited type.
2. We pass the shutdown channel to the writer task.
3. In the reader, we create a `_shutdown_sender` whose only purpose is to get dropped (a wiring sketch follows this list).
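Here's a rough sketch of that wiring on the reader side, with illustrative names; `connection_writer_loop` (shown next) is stubbed out:

```rust,edition2018
# extern crate async_std;
# extern crate futures;
# use async_std::{net::TcpStream, task};
# use futures::channel::mpsc;
# use std::sync::Arc;
# type Receiver<T> = mpsc::UnboundedReceiver<T>;
# async fn connection_writer_loop(
#     _messages: &mut Receiver<String>,
#     _stream: Arc<TcpStream>,
#     _shutdown: Receiver<Void>,
# ) -> std::io::Result<()> { Ok(()) }
#[derive(Debug)]
enum Void {} // 1: uninhabited, so no message can ever be sent along the channel

async fn connection_loop(stream: Arc<TcpStream>, mut messages: Receiver<String>) {
    let (_shutdown_sender, shutdown_receiver) = mpsc::unbounded::<Void>();
    // 2: the receiving end is handed to the writer task.
    task::spawn(async move {
        let _ = connection_writer_loop(&mut messages, stream, shutdown_receiver).await;
    });
    // ... the reader loop over incoming lines would run here ...
    // 3: `_shutdown_sender` stays in the reader; when this task finishes
    // (normal return, `?`, or panic) it is dropped, the channel closes,
    // and the writer observes `None` as its shutdown signal.
}
```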
In the `connection_writer_loop`, we now need to choose between shutdown and message channels.
We use the `select` macro for this purpose:
```rust,edition2018
# extern crate async_std;
# extern crate futures_channel;
# extern crate futures_util;
# use async_std::net::TcpStream;
# use futures_channel::mpsc;
# use futures_util::io::AsyncWriteExt;
use futures_util::{select, FutureExt, StreamExt};
# use std::sync::Arc;
#
# type Receiver<T> = mpsc::UnboundedReceiver<T>;
# type Result<T> = std::result::Result<T, Box<dyn std::error::Error + Send + Sync>>;
#
# #[derive(Debug)]
# enum Void {} // 1

async fn connection_writer_loop(
    messages: &mut Receiver<String>,
    stream: Arc<TcpStream>,
    shutdown: Receiver<Void>, // 1
) -> Result<()> {
    let mut stream = &*stream;
    let mut messages = messages.fuse();
    let mut shutdown = shutdown.fuse();
    loop { // 2
        select! {
            msg = messages.next().fuse() => match msg {
                Some(msg) => stream.write_all(msg.as_bytes()).await?,
                None => break,
            },
            void = shutdown.next().fuse() => match void {
                Some(void) => match void {}, // 3
                None => break,
            }
        }
    }
    Ok(())
}
```
1. We add the shutdown channel as an argument.
2. Because of `select`, we can't use a `while let` loop, so we desugar it further into a `loop`.
3. In the shutdown case we use `match void {}` as a statically-checked `unreachable!()`.
Another problem is that between the moment we detect disconnection in `connection_writer_loop` and the moment when we actually remove the peer from the `peers` map, new messages might be pushed into the peer's channel.
To not lose these messages completely, we'll return the messages channel back to the broker.
This also allows us to establish a useful invariant that the message channel strictly outlives the peer in the `peers` map, and makes the broker itself infallible.
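To sketch that hand-back (the `disconnect` channel and the wrapper are assumed names, not fixed ones from the chapter): because `connection_writer_loop` only borrows `messages`, the task that spawned it still owns the channel and can ship it back once the writer returns.

```rust,edition2018
# extern crate futures;
use futures::channel::mpsc;
use futures::SinkExt;

type Sender<T> = mpsc::UnboundedSender<T>;
type Receiver<T> = mpsc::UnboundedReceiver<T>;

// Hypothetical wrapper around the writer: when it finishes, return the
// peer's name and its (possibly non-empty) messages channel to the broker,
// which removes the peer and can then deal with any leftover messages.
async fn writer_task(
    name: String,
    messages: Receiver<String>,
    mut disconnect: Sender<(String, Receiver<String>)>,
) {
    // (the actual `connection_writer_loop(&mut messages, ...)` call is elided)
    disconnect
        .send((name, messages))
        .await
        .unwrap(); // the broker outlives every writer
}
```

This is also why `connection_writer_loop` takes `messages: &mut Receiver<String>` rather than owning the receiver: ownership stays with the surrounding task, so the channel can outlive the entry in the `peers` map.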
One serious problem in the above solution is that, while we correctly propagate errors in the `connection_loop`, we just drop the error on the floor afterwards!
That is, `task::spawn` does not return an error immediately (it can't: it needs to run the future to completion first); the error only becomes available once the task is joined.
We can "fix" it by waiting for the task to be joined, like this:
```rust,edition2018
# extern crate async_std;
# use async_std::{net::TcpStream, task};
#
# type Result<T> = std::result::Result<T, Box<dyn std::error::Error + Send + Sync>>;
#
# async fn connection_loop(stream: TcpStream) -> Result<()> { Ok(()) }
#
# async fn accept_loop(stream: TcpStream) -> Result<()> {
let handle = task::spawn(connection_loop(stream));
handle.await
# }
```