* Implement async_std::sync::Condvar
Part of #217
* More rigorous detection of notifications for condvar
* Use the state of the Waker instead of an AtomicUsize to keep track of
whether the task was notified.
* Add test for notify_all
* Implement wait_timeout_until
And add warnings about spurious wakeups to wait and wait_timeout
* Use WakerSet for Condvar
This should also address concerns about spurious wakeups.
* Add test for wait_timeout with no lock held
* Add comments describing AwaitNotify struct
And remove an unneeded comment in a Debug implementation
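A sketch of how the resulting `Condvar` might be used (assuming the unstable `async_std::sync::Condvar` API described above); the loop around `wait` is what guards against spurious wakeups:

```rust
use async_std::sync::{Arc, Condvar, Mutex};
use async_std::task;

// Wait for a flag set by another task, re-checking the condition in a
// loop because `wait` may return spuriously.
async fn wait_for_flag() {
    let pair = Arc::new((Mutex::new(false), Condvar::new()));
    let pair2 = Arc::clone(&pair);

    task::spawn(async move {
        let (lock, cvar) = &*pair2;
        *lock.lock().await = true;
        cvar.notify_one();
    });

    let (lock, cvar) = &*pair;
    let mut started = lock.lock().await;
    while !*started {
        started = cvar.wait(started).await;
    }
}
```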
Even if we do not make use of the progress blocking, as far as I understand we do need to make use of the dynamic restarting of machines.
This keeps the perf while removing the regression from #747
At the moment it's not clear when and why `recv` returns `Option<T>`
instead of just `T`. The updated comment makes it clear that `None` will
only be returned once no data will ever be sent again (i.e. after all
senders are gone).
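A small sketch of the documented behaviour (assuming the unstable `sync::channel` API referred to here, where `recv` returns `Option<T>`):

```rust
use async_std::sync::channel;
use async_std::task;

async fn consume() {
    let (sender, receiver) = channel::<i32>(10);

    task::spawn(async move {
        sender.send(1).await;
        sender.send(2).await;
        // `sender` is dropped here; no more data will ever be sent.
    });

    // `recv` yields `Some(msg)` while data can still arrive...
    while let Some(msg) = receiver.recv().await {
        println!("got {}", msg);
    }
    // ...and only returns `None` once all senders are gone.
}
```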
* Use non-blocking connect for TcpStream.
Instead of spawning a background thread, which is unaware of any timeouts
and continues to run until the TCP stack decides that the remote is not
reachable, we use mio's non-blocking connect.
mio's `TcpStream::connect` returns immediately but the actual connection
is usually just in progress and we have to be sure the socket is
writeable before we can consider the connection as established.
* Add Watcher::{poll_read_ready, poll_write_ready}.
Following a suggestion of @stjepang, we offer methods to check for
read/write readiness of a `Watcher` instead of the previous approach of
accepting a set of `Waker`s when registering an event source. The changes
relative to master are smaller and both methods look more useful in
other contexts. Also the code is more robust w.r.t. wakeups of the
`Waker` from clones outside the `Reactor`.
I am not sure if we need to add protection mechanisms against spurious
wakeups from mio. Currently we treat the `Poll::Ready(())` of
`Watcher::poll_write_ready` as proof that the non-blocking connect has
finished, but if the event from mio was a spurious one, it might still
be ongoing.
This was previously discussed in #605 and others as a source of high
CPU load while tasks are sleeping, because of the overhead created by
retrying a future in short succession.
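As a rough illustration of the idea (written against mio's 0.7-style `Registry` API rather than async-std's internal `Watcher`, so the details are an assumption): the non-blocking `connect` returns immediately, and even after the socket reports writable we verify via `take_error`/`peer_addr` that the connection really finished instead of treating the wakeup as proof.

```rust
use std::io;

use mio::net::TcpStream;
use mio::{Events, Interest, Poll, Token};

fn connect(addr: std::net::SocketAddr) -> io::Result<TcpStream> {
    // Returns immediately; the connection is usually still in progress.
    let mut stream = TcpStream::connect(addr)?;

    let mut poll = Poll::new()?;
    let mut events = Events::with_capacity(8);
    poll.registry()
        .register(&mut stream, Token(0), Interest::WRITABLE)?;

    loop {
        poll.poll(&mut events, None)?;
        for event in events.iter() {
            if event.token() == Token(0) && event.is_writable() {
                // Writability alone could be a spurious wakeup, so check
                // whether the connect actually finished (or failed).
                if let Some(err) = stream.take_error()? {
                    return Err(err);
                }
                if stream.peer_addr().is_ok() {
                    return Ok(stream);
                }
            }
        }
    }
}
```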
The BufWriter docs inaccurately stated that it flushes on drop, which it does
not do. This PR changes the docs, as well as the example, to highlight that
the user must explicitly flush a `BufWriter`.
There were also two places where the BufWriter code referred to it as a
BufReader: in the link to the std docs, and in the Debug output. Those have
also been fixed.
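A small sketch of the intended usage: without an explicit `flush`, buffered data may be lost when the writer is dropped.

```rust
use async_std::fs::File;
use async_std::io::BufWriter;
use async_std::prelude::*;

async fn write_report() -> std::io::Result<()> {
    let file = File::create("report.txt").await?;
    let mut writer = BufWriter::new(file);

    writer.write_all(b"all good\n").await?;

    // Explicitly flush before the writer goes out of scope; dropping a
    // BufWriter does *not* flush the remaining buffered bytes.
    writer.flush().await?;
    Ok(())
}
```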
Moves the point of adding error context to the net::addr module so that
we have access to the raw address input and can include it in the error
message.
* tutorial/receiving_messages: fix future output type bound
* tutorial/receiving_messages: remove unneeded message trimming
Trimming was done twice on messages, so one of the two instances can
be removed. I personally think removing the first instance, in which
we are splitting names from messages, makes the code more readable
than removing the second instance, but other examples further in
the tutorial show the second instance removed.
* tutorial/receiving_messages: declare use of TcpStream and io::BufReader
Readers couldn't see the `use` lines corresponding to these two
structures.
* tutorial/connecting_readers_and_writers: typos and grammar fixes
* tutorial/all_together: remove unneeded use async_std::io
* tutorial: use SinkExt consistently from futures::sink::SinkExt
* tutorial/handling_disconnection: hide mpsc use clause and remove empty lines
The empty lines carried over into the rendered output, making it look weird.
* tutorial/handling_disconnection: fix typos
* tutorial/handling_disconnection: use ? in broker_handle.await
We were happy to return an `Err` variant from the `broker_handle` before,
and nothing has changed in this regard, so we now bubble it up to `run()`.
This adds a new "verbose-errors" feature flag to async-std that enables
wrapping certain errors in structures with more context. As an example,
we use it in `fs::File::{open,create}` to add the given path to the
error message (something that is lacking in std, to the annoyance of many).
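A minimal sketch of the kind of wrapping such a flag can enable (the helper below is illustrative, not the actual `verbose-errors` implementation):

```rust
use std::io;
use std::path::Path;

// Attach the offending path so "No such file or directory" becomes actionable.
fn annotate_with_path(err: io::Error, path: &Path) -> io::Error {
    io::Error::new(
        err.kind(),
        format!("{} (while accessing `{}`)", err, path.display()),
    )
}

fn open_verbose(path: &Path) -> io::Result<std::fs::File> {
    std::fs::File::open(path).map_err(|e| annotate_with_path(e, path))
}
```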
* Stream::merge does not end prematurely if one stream is delayed
* `cargo test` without features works
* Stream::merge works correctly for unfused streams
* Add a small example for listening to both ipv4 and ipv6
Presenting stream merge on `Incoming` (a sketch follows after this list).
* Change the stable-checks workflow to cover tests rather than examples
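A sketch of what that example might look like (assuming the `Stream::merge` combinator, which sits behind the `unstable` feature):

```rust
use async_std::net::TcpListener;
use async_std::prelude::*;
use async_std::task;

async fn listen_on_both() -> std::io::Result<()> {
    let v4 = TcpListener::bind("127.0.0.1:8080").await?;
    let v6 = TcpListener::bind("[::1]:8080").await?;

    // Accept connections from either listener as a single stream.
    let mut incoming = v4.incoming().merge(v6.incoming());

    while let Some(stream) = incoming.next().await {
        let stream = stream?;
        task::spawn(async move {
            // handle the connection here
            drop(stream);
        });
    }
    Ok(())
}
```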
* Add utility type Registry to the sync module
* Remove unused import
* Split unregister into complete and cancel
* Refactoring and renaming
* Split remove() into complete() and cancel()
* Rename to WakerSet
* Ignore clippy warning
* Ignore another clippy warning
* Use stronger SeqCst ordering
`once_cell` provides a neat way of initializing lazy singletons without
macros. This PR uses `sync::Lazy` to streamline the same pattern proposed in
the related Rust RFC.
Resolves #406
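For illustration, the pattern this enables (a sketch; the singleton below is hypothetical):

```rust
use std::collections::HashMap;
use std::sync::Mutex;

use once_cell::sync::Lazy;

// Initialized lazily on first access, with no macro involved.
static SETTINGS: Lazy<Mutex<HashMap<String, String>>> =
    Lazy::new(|| Mutex::new(HashMap::new()));

fn main() {
    SETTINGS
        .lock()
        .unwrap()
        .insert("threads".into(), "4".into());
}
```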
These are the stream equivalents to `std::iter::Iterator::sum()` and
`std::iter::Iterator::product()`.
Note that this changeset tweaks the `Stream::Sum` and `Stream::Product`
traits a little: rather than returning a generic future `F`, they return
a pinned, boxed, `Future` trait object now. This is in line with other
traits that return a future, e.g. `FromStream`.
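Usage mirrors the iterator versions (a sketch; some of these stream APIs sit behind the `unstable` feature):

```rust
use async_std::prelude::*;
use async_std::stream;

async fn totals() {
    let sum: u8 = stream::from_iter(vec![1u8, 2, 3, 4]).sum().await;
    let product: u8 = stream::from_iter(vec![1u8, 2, 3, 4]).product().await;
    assert_eq!(sum, 10);
    assert_eq!(product, 24);
}
```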
* add stream::LastFuture (not compiling)
Struggling with the associated type, pinning and how to move/copy
LastFuture.last.
* fix type signature -> still cannot assign
still problems assigning the new value to self.last
* remove unused bound
* add doctest
* unpin LastFuture.last
* RustFmt
* add static lifetime
* remove redundant lifetime
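Roughly, the intended usage of the resulting `last` combinator (a sketch):

```rust
use async_std::prelude::*;
use async_std::stream;

async fn last_elements() {
    let s = stream::from_iter(vec![1u8, 2, 3]);
    assert_eq!(s.last().await, Some(3));

    // An empty stream has no last element.
    let empty = stream::empty::<u8>();
    assert_eq!(empty.last().await, None);
}
```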
- Ensure the reactor is running for sockets and timers ([#819](https://github.com/async-rs/async-std/pull/819)).
- Avoid excessive polling in `flatten` and `flat_map` ([#701](https://github.com/async-rs/async-std/pull/701)).
# [1.6.1] - 2020-06-11
## Added
- Added `tokio02` feature flag, to allow compatibility usage with tokio@0.2 ([#804](https://github.com/async-rs/async-std/pull/804)).
## Changed
- Removed unstable `stdio` lock methods, due to their unsoundness ([#807](https://github.com/async-rs/async-std/pull/807)).
## Fixed
- Fixed wrong slice index for file reading ([#802](https://github.com/async-rs/async-std/pull/802)).
- Fixed recursive calls to `block_on` ([#799](https://github.com/async-rs/async-std/pull/799)) and ([#809](https://github.com/async-rs/async-std/pull/809)).
- Remove `default` feature requirement for the `unstable` feature ([#806](https://github.com/async-rs/async-std/pull/806)).
# [1.6.0] - 2020-05-22
See `1.6.0-beta.1` and `1.6.0-beta.2`.
# [1.6.0-beta.2] - 2020-05-19
## Added
- Added an environment variable to configure the thread pool size of the runtime. ([#774](https://github.com/async-rs/async-std/pull/774))
- Implement `Clone` for `UnixStream` ([#772](https://github.com/async-rs/async-std/pull/772))
## Changed
- For `wasm`, switched underlying `Timer` implementation to [`futures-timer`](https://github.com/async-rs/futures-timer). ([#776](https://github.com/async-rs/async-std/pull/776))
## Fixed
- Use `smol::block_on` to handle drop of `File`, avoiding nested executor panic. ([#768](https://github.com/async-rs/async-std/pull/768))
* [async-tls](https://crates.io/crates/async-tls) — Async TLS/SSL streams using **Rustls**.
* [async-native-tls](https://crates.io/crates/async-native-tls) — **Native TLS** for futures and async-std.
* [async-tungstenite](https://crates.io/crates/async-tungstenite) — Asynchronous **WebSockets** for async-std, tokio, gio and any std Futures runtime.
* [Tide](https://crates.io/crates/tide) — Serve the web. A modular **web framework** built around async/await.
* [SQLx](https://crates.io/crates/sqlx) — The Rust **SQL** Toolkit. SQLx is a 100% safe Rust library for Postgres and MySQL with compile-time checked queries.
* [Surf](https://crates.io/crates/surf) — Surf the web. Surf is a friendly **HTTP client** built for casual Rustaceans and veterans alike.
* [Xactor](https://crates.io/crates/xactor) — A Rust actors framework based on async-std.
* [async-graphql](https://crates.io/crates/async-graphql) — A **GraphQL** server library implemented in Rust, with full support for async/await.
## License
Licensed under either of
* Apache License, Version 2.0 ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0)
* MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT)
at your option.
#### Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted
for inclusion in the work by you, as defined in the Apache-2.0 license, shall be
dual licensed as above, without any additional terms or conditions.
## An easy view of computation
While computation is a subject to write a whole [book](https://computationbook.com/) about, a very simplified view suffices for us: a sequence of composable operations which can branch based on a decision, run to completion and yield a result, or yield an error.
## Deferring computation
## Conclusion
Working from values, we searched for something that expresses *working towards a value available later*. From there, we talked about the concept of polling.
A `Future` is any data type that does not represent a value, but the ability to *produce a value at some point in the future*. Implementations of this are very varied and detailed depending on use-case, but the interface is simple.
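For reference, this is the whole interface as defined in the standard library:

```rust,edition2018
use std::pin::Pin;
use std::task::{Context, Poll};

pub trait Future {
    type Output;

    /// Attempt to resolve the future into a final value, registering the
    /// current task for wakeup if the value is not ready yet.
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output>;
}
```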
Next, we will introduce you to `tasks`, which we will use to actually *run* Futures.
[^1]: Two parties reading while it is guaranteed that no one is writing is always safe.
## Blocking
`Task`s are assumed to run _concurrently_, potentially by sharing a thread of execution. This means that operations blocking an _operating system thread_, such as `std::thread::sleep` or I/O functions from Rust's `std` library, will _stop execution of all tasks sharing this thread_. Other libraries (such as database drivers) have similar behaviour. Note that _blocking the current thread_ is not in and of itself bad behaviour, just something that does not mix well with the concurrent execution model of `async-std`. Essentially, never do this:
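A minimal sketch of what such blocking looks like:

```rust,edition2018
# extern crate async_std;
use async_std::task;
use std::time::Duration;

fn main() {
    task::block_on(async {
        // This parks the whole OS thread, stalling every task scheduled on it.
        std::thread::sleep(Duration::from_secs(1));
        // Prefer the non-blocking timer instead:
        // task::sleep(Duration::from_secs(1)).await;
    });
}
```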
`async-std` provides an interface to all important primitives: filesystem operations, network operations and concurrency basics like timers. It also exposes a `task` in a model similar to the `thread` module found in the Rust standard lib. It includes not only I/O primitives, but also `async/await`-compatible versions of primitives like `Mutex`.
## Security fixes
Security fixes will be applied to _all_ minor branches of this library in all _supported_ major revisions. This policy might change in the future, in which case we give a notice at least _3 months_ ahead.
- the first is `std::future::Future` from Rust’s [standard library](https://doc.rust-lang.org/std/future/trait.Future.html).
- the second is `futures::future::Future` from the [futures-rs crate](https://docs.rs/futures/0.3/futures/prelude/trait.Future.html).
The future defined in the [futures-rs](https://docs.rs/futures/0.3/futures/prelude/trait.Future.html) crate was the original implementation of the type. To enable the `async/await` syntax, the core Future trait was moved into Rust’s standard library and became `std::future::Future`. In some sense, the `std::future::Future` can be seen as a minimal subset of `futures::future::Future`.
It is critical to understand the difference between `std::future::Future` and `futures::future::Future`, and the approach that `async-std` takes towards them. In itself, `std::future::Future` is not something you want to interact with as a user—except by calling `.await` on it. The inner workings of `std::future::Future` are mostly of interest to people implementing `Future`. Make no mistake—this is very useful! Most of the functionality that used to be defined on `Future` itself has been moved to an extension trait called [`FutureExt`](https://docs.rs/futures/0.3/futures/future/trait.FutureExt.html). From this information, you might be able to infer that the `futures` library serves as an extension to the core Rust async features.
In the same tradition as `futures`, `async-std` re-exports the core `std::future::Future` type. You can actively opt into the extensions provided by the `futures` crate by adding it to your `Cargo.toml` and importing `FutureExt`.
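A small sketch of opting into those extensions:

```rust,edition2018
# extern crate futures;
use futures::future::FutureExt;

async fn doubled() -> u8 {
    // `map` is provided by the `FutureExt` extension trait,
    // not by `std::future::Future` itself.
    async { 21u8 }.map(|x| x * 2).await
}
```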
So how do we make sure that messages read in `connection_loop` flow into the relevant `connection_writer_loop`?
We should somehow maintain a `peers: HashMap<String, Sender<String>>` map which allows a client to find destination channels.
However, this map would be a bit of shared mutable state, so we'll have to wrap an `RwLock` over it and answer tough questions of what should happen if the client joins at the same moment as it receives a message.
One trick to make reasoning about state simpler comes from the actor model.
We can create a dedicated broker task which owns the `peers` map and communicates with other tasks using channels.
By hiding `peers` inside such an "actor" task, we remove the need for mutexes and also make the serialization point explicit.
The order of events "Bob sends message to Alice" and "Alice joins" is determined by the order of the corresponding events in the broker's event queue.
```rust,edition2018
# extern crate async_std;
# extern crate futures;
# use async_std::{
#     net::TcpStream,
#     prelude::*,
#     task,
# };
# use futures::channel::mpsc;
# use futures::sink::SinkExt;
# use std::sync::Arc;
#
# type Result<T> = std::result::Result<T, Box<dyn std::error::Error + Send + Sync>>;
Another problem is that between the moment we detect disconnection in `connection_writer_loop` and the moment when we actually remove the peer from the `peers` map, new messages might be pushed into the peer's channel.
To not lose these messages completely, we'll return the messages channel back to the broker.
This also allows us to establish a useful invariant that the message channel strictly outlives the peer in the `peers` map, and makes the broker itself infallible.
## Final Code
Since the protocol is line-based, implementing a client for the chat is straightforward:
* Lines read from stdin should be sent over the socket.
* Lines read from the socket should be echoed to stdout.
Unlike the server, the client needs only limited concurrency, as it interacts with only a single user.
For this reason, async doesn't bring a lot of performance benefits in this case.
However, async is still useful for managing concurrency!
Specifically, the client should *simultaneously* read from stdin and from the socket.
Programming this with threads is cumbersome, especially when implementing a clean shutdown.
With async, we can just use the `select!` macro.
```rust,edition2018
# extern crate async_std;
# extern crate futures;
use async_std::{
    io::{stdin, BufReader},
    net::{TcpStream, ToSocketAddrs},
    prelude::*,
    task,
};
use futures::{select, FutureExt};

type Result<T> = std::result::Result<T, Box<dyn std::error::Error + Send + Sync>>;