From 0205ea3fc829f4656bf68c0020221487c6461446 Mon Sep 17 00:00:00 2001 From: Daniel Carosone Date: Sat, 17 Aug 2019 08:58:40 +1000 Subject: [PATCH 1/9] more apostrophes --- docs/src/concepts/futures.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/src/concepts/futures.md b/docs/src/concepts/futures.md index 31069ca8..c674a1e9 100644 --- a/docs/src/concepts/futures.md +++ b/docs/src/concepts/futures.md @@ -106,9 +106,9 @@ This function sets up a deferred computation. When this function is called, it w ## What does `.await` do? -The `.await` postfix does exactly what it says on the tin: the moment you use it, the code will wait until the requested action (e.g. opening a file or reading all data in it) is finished. `.await?` is not special, it's just the application of the `?` operator to the result of `.await`. So, what is gained over the initial code example? We’re getting futures and then immediately waiting for them? +The `.await` postfix does exactly what it says on the tin: the moment you use it, the code will wait until the requested action (e.g. opening a file or reading all data in it) is finished. `.await?` is not special, it's just the application of the `?` operator to the result of `.await`. So, what is gained over the initial code example? We're getting futures and then immediately waiting for them? -The `.await` points act as a marker. Here, the code will wait for a `Future` to produce its value. How will a future finish? You don’t need to care! The marker allows the code later *executing* this piece of code (usually called the “runtime”) when it can take some time to care about all the other things it has to do. It will come back to this point when the operation you are doing in the background is done. This is why this style of programming is also called *evented programming*. We are waiting for *things to happen* (e.g. a file to be opened) and then react (by starting to read). +The `.await` points act as a marker. 
Here, the code will wait for a `Future` to produce its value. How will a future finish? You don't need to care! The marker allows the code later *executing* this piece of code (usually called the “runtime”) when it can take some time to care about all the other things it has to do. It will come back to this point when the operation you are doing in the background is done. This is why this style of programming is also called *evented programming*. We are waiting for *things to happen* (e.g. a file to be opened) and then react (by starting to read). When executing 2 or more of these functions at the same time, our runtime system is then able to fill the wait time with handling *all the other events* currently going on. From 3df6c39e1799b92c08c21feea4aaf943d83a65d7 Mon Sep 17 00:00:00 2001 From: Daniel Carosone Date: Sat, 17 Aug 2019 09:15:52 +1000 Subject: [PATCH 2/9] switch paragraphs around.. condense the summary and generalise the note --- docs/src/concepts/futures.md | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/docs/src/concepts/futures.md b/docs/src/concepts/futures.md index c674a1e9..1048aefa 100644 --- a/docs/src/concepts/futures.md +++ b/docs/src/concepts/futures.md @@ -6,13 +6,15 @@ Futures abstract over *computation*. They describe the "what", independent of th ## Send and Sync -Luckily, concurrent Rust already has two well-known and effective concepts abstracting over sharing between concurrent parts of a program: Send and Sync. Notably, both the Send and Sync traits abstract over *strategies* of concurrent work, compose neatly, and don't prescribe an implementation. +Luckily, concurrent Rust already has two well-known and effective concepts abstracting over sharing between concurrent parts of a program: `Send` and `Sync`. Notably, both the `Send` and `Sync` traits abstract over *strategies* of concurrent work, compose neatly, and don't prescribe an implementation. 
-As a quick summary, `Send` abstracts over passing data in a computation over to another concurrent computation (let's call it the receiver), losing access to it on the sender side. In many programming languages, this strategy is commonly implemented, but missing support from the language side expects you to enforce the "losing access" behaviour yourself. This is a regular source of bugs: senders keeping handles to sent things around and maybe even working with them after sending. Rust mitigates this problem by making this behaviour known. Types can be `Send` or not (by implementing the appropriate marker trait), allowing or disallowing sending them around, and the ownership and borrowing rules prevent subsequent access. +As a quick summary: -Note how we avoided any word like *"thread"*, but instead opted for "computation". The full power of `Send` (and subsequently also `Sync`) is that they relieve you of the burden of knowing *what* shares. At the point of implementation, you only need to know which method of sharing is appropriate for the type at hand. This keeps reasoning local and is not influenced by whatever implementation the user of that type later uses. +* `Send` abstracts over *passing data* in a computation to another concurrent computation (let's call it the receiver), losing access to it on the sender side. In many programming languages, this strategy is commonly implemented, but missing support from the language side expects you to enforce the "losing access" behaviour yourself. This is a regular source of bugs: senders keeping handles to sent things around and maybe even working with them after sending. Rust mitigates this problem by making this behaviour known. Types can be `Send` or not (by implementing the appropriate marker trait), allowing or disallowing sending them around, and the ownership and borrowing rules prevent subsequent access. -`Sync` is about sharing data between two concurrent parts of a program. 
This is another common pattern: as writing to a memory location or reading while another party is writing is inherently unsafe, this access needs to be moderated through synchronisation.[^1] There are many common ways for two parties to agree on not using the same part in memory at the same time, for example mutexes and spinlocks. Again, Rust gives you the option of (safely!) not caring. Rust gives you the ability to express that something *needs* synchronisation while not being specific about the *how*. +* `Sync` is about *sharing data* between two concurrent parts of a program. This is another common pattern: as writing to a memory location or reading while another party is writing is inherently unsafe, this access needs to be moderated through synchronisation.[^1] There are many common ways for two parties to agree on not using the same part in memory at the same time, for example mutexes and spinlocks. Again, Rust gives you the option of (safely!) not caring. Rust gives you the ability to express that something *needs* synchronisation while not being specific about the *how*. + +Note how we avoided any word like *"thread"*, but instead opted for "computation". The full power of `Send` and `Sync` is that they relieve you of the burden of knowing *what* shares. At the point of implementation, you only need to know which method of sharing is appropriate for the type at hand. This keeps reasoning local and is not influenced by whatever implementation the user of that type later uses. `Send` and `Sync` can be composed in interesting fashions, but that's beyond the scope here. You can find examples in the [Rust Book][rust-book-sync]. 
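To make the summary above concrete, here is a small illustrative sketch (not part of the book's text) of the `Send` half, using plain `std` threads and a channel as one possible implementation of "another concurrent computation". Ownership of the `String` moves to the receiver, and the borrow checker stops the sender from using it afterwards:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (sender, receiver) = mpsc::channel();

    let message = String::from("hello");
    thread::spawn(move || {
        // `String` is `Send`, so handing it over is allowed; after this
        // call the sending side has lost access, enforced at compile time.
        sender.send(message).unwrap();
    });

    // The receiving computation now owns the data.
    let received: String = receiver.recv().unwrap();
    assert_eq!(received, "hello");
}
```

Adding a use of `message` after the `send` call fails to compile, which is exactly the "losing access" behaviour the language enforces.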
From 5d7b641813b2dc35c58fa94e6df7a6f407508103 Mon Sep 17 00:00:00 2001 From: Daniel Carosone Date: Sat, 17 Aug 2019 09:16:59 +1000 Subject: [PATCH 3/9] bullet character --- docs/src/concepts/futures.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/src/concepts/futures.md b/docs/src/concepts/futures.md index 1048aefa..1c5bc243 100644 --- a/docs/src/concepts/futures.md +++ b/docs/src/concepts/futures.md @@ -10,9 +10,9 @@ Luckily, concurrent Rust already has two well-known and effective concepts abstr As a quick summary: -* `Send` abstracts over *passing data* in a computation to another concurrent computation (let's call it the receiver), losing access to it on the sender side. In many programming languages, this strategy is commonly implemented, but missing support from the language side expects you to enforce the "losing access" behaviour yourself. This is a regular source of bugs: senders keeping handles to sent things around and maybe even working with them after sending. Rust mitigates this problem by making this behaviour known. Types can be `Send` or not (by implementing the appropriate marker trait), allowing or disallowing sending them around, and the ownership and borrowing rules prevent subsequent access. +- `Send` abstracts over *passing data* in a computation to another concurrent computation (let's call it the receiver), losing access to it on the sender side. In many programming languages, this strategy is commonly implemented, but missing support from the language side expects you to enforce the "losing access" behaviour yourself. This is a regular source of bugs: senders keeping handles to sent things around and maybe even working with them after sending. Rust mitigates this problem by making this behaviour known. Types can be `Send` or not (by implementing the appropriate marker trait), allowing or disallowing sending them around, and the ownership and borrowing rules prevent subsequent access. 
-* `Sync` is about *sharing data* between two concurrent parts of a program. This is another common pattern: as writing to a memory location or reading while another party is writing is inherently unsafe, this access needs to be moderated through synchronisation.[^1] There are many common ways for two parties to agree on not using the same part in memory at the same time, for example mutexes and spinlocks. Again, Rust gives you the option of (safely!) not caring. Rust gives you the ability to express that something *needs* synchronisation while not being specific about the *how*. +- `Sync` is about *sharing data* between two concurrent parts of a program. This is another common pattern: as writing to a memory location or reading while another party is writing is inherently unsafe, this access needs to be moderated through synchronisation.[^1] There are many common ways for two parties to agree on not using the same part in memory at the same time, for example mutexes and spinlocks. Again, Rust gives you the option of (safely!) not caring. Rust gives you the ability to express that something *needs* synchronisation while not being specific about the *how*. Note how we avoided any word like *"thread"*, but instead opted for "computation". The full power of `Send` and `Sync` is that they relieve you of the burden of knowing *what* shares. At the point of implementation, you only need to know which method of sharing is appropriate for the type at hand. This keeps reasoning local and is not influenced by whatever implementation the user of that type later uses. 
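A matching illustrative sketch (again not from the book's text) for the `Sync` side: several threads share one counter behind an `Arc<Mutex<_>>`. The callers only express that the data needs synchronisation; the *how* is the mutex's business:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Mutex<i32> is Sync: shared access is moderated by locking.
    // Arc lets several computations hold the same allocation.
    let counter = Arc::new(Mutex::new(0));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                // The lock is the chosen synchronisation strategy;
                // swapping it for another one would not change the callers.
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4);
}
```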
From ddb11bfb09c3b976a6909c8704025d0f92ea1736 Mon Sep 17 00:00:00 2001 From: Daniel Carosone Date: Sat, 17 Aug 2019 10:07:06 +1000 Subject: [PATCH 4/9] more wording tweaks --- docs/src/concepts/futures.md | 38 +++++++++++++++++++++--------------- 1 file changed, 22 insertions(+), 16 deletions(-) diff --git a/docs/src/concepts/futures.md b/docs/src/concepts/futures.md index 1c5bc243..602f5d76 100644 --- a/docs/src/concepts/futures.md +++ b/docs/src/concepts/futures.md @@ -32,17 +32,17 @@ While computation is a subject to write a whole [book](https://computationbook.c ## Deferring computation -As mentioned above `Send` and `Sync` are about data. But programs are not only about data, they also talk about *computing* the data. And that's what [`Futures`][futures] do. We are going to have a close look at how that works in the next chapter. Let's look at what Futures allow us to express, in English. Futures go from this plan: +As mentioned above, `Send` and `Sync` are about data. But programs are not only about data, they also talk about *computing* the data. And that's what [`Futures`][futures] do. We are going to have a close look at how that works in the next chapter. Let's look at what Futures allow us to express, in English. Futures go from this plan: - Do X -- If X succeeds, do Y +- If X succeeded, do Y -towards +towards: - Start doing X - Once X succeeds, start doing Y -Remember the talk about "deferred computation" in the intro? That's all it is. Instead of telling the computer what to execute and decide upon *now*, you tell it what to start doing and how to react on potential events the... well... `Future`. +Remember the talk about "deferred computation" in the intro? That's all it is. Instead of telling the computer what to execute and decide upon *now*, you tell it what to start doing and how to react on potential events in the... well... `Future`. 
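The gap between those two plans can be shown in code. The sketch below is illustrative only: it hand-rolls a deliberately naive `block_on` that polls with a do-nothing waker, where a real runtime would park and be woken by events instead of spinning. The point is that building a future runs nothing; the work happens only when something polls it:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A waker that does nothing: enough for an executor that just polls in a loop.
fn noop_raw_waker() -> RawWaker {
    fn clone(_: *const ()) -> RawWaker {
        noop_raw_waker()
    }
    fn no_op(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
    RawWaker::new(std::ptr::null(), &VTABLE)
}

// A deliberately naive executor: poll until the future is Ready.
fn block_on<F: Future>(mut future: F) -> F::Output {
    let waker = unsafe { Waker::from_raw(noop_raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    // Safety: `future` stays in this stack frame and is never moved again.
    let mut future = unsafe { Pin::new_unchecked(&mut future) };
    loop {
        match future.as_mut().poll(&mut cx) {
            Poll::Ready(value) => return value,
            Poll::Pending => std::thread::yield_now(),
        }
    }
}

fn main() {
    // Building the future runs nothing: it is only the plan "do X, then Y".
    let plan = async { 20 + 22 };
    // The computation happens only once something polls the future.
    assert_eq!(block_on(plan), 42);
}
```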
[futures]: https://doc.rust-lang.org/std/future/trait.Future.html @@ -57,10 +57,10 @@ Let's have a look at a simple function, specifically the return value: Ok(contents) } -You can call that at any time, so you are in full control on when you call it. But here's the problem: the moment you call it, you transfer control to the called function. It returns a value. -Note that this return value talks about the past. The past has a drawback: all decisions have been made. It has an advantage: the outcome is visible. We can unwrap the presents of program past and then decide what to do with it. +You can call that at any time, so you are in full control on when you call it. But here's the problem: the moment you call it, you transfer control to the called function until it returns a value - eventually. +Note that this return value talks about the past. The past has a drawback: all decisions have been made. It has an advantage: the outcome is visible. We can unwrap the results of the program's past computation, and then decide what to do with it. -But here's a problem: we wanted to abstract over *computation* to be allowed to let someone else choose how to run it. That's fundamentally incompatible with looking at the results of previous computation all the time. So, let's find a type that describes a computation without running it. Let's look at the function again: +But we wanted to abstract over *computation* and let someone else choose how to run it. That's fundamentally incompatible with looking at the results of previous computation all the time. So, let's find a type that *describes* a computation without running it. Let's look at the function again: fn read_file(path: &str) -> Result<String, std::io::Error> { let mut file = File::open(path)?; let mut contents = String::new(); file.read_to_string(&mut contents)?; Ok(contents) } Speaking in terms of time, we can only take action *before* calling the function or *after* the function returned. 
This is not desirable, as it takes from us the ability to do something *while* it runs. When working with parallel code, this would take from us the ability to start a parallel task while the first runs (because we gave away control). This is the moment where we could reach for [threads](https://en.wikipedia.org/wiki/Thread_(computing)). But threads are a very specific concurrency primitive and we said that we are searching for an abstraction. -What we are searching is something that represents ongoing work towards a result in the future. Whenever we say `something` in Rust, we almost always mean a trait. Let's start with an incomplete definition of the `Future` trait: + +What we are searching for is something that represents ongoing work towards a result in the future. Whenever we say "something" in Rust, we almost always mean a trait. Let's start with an incomplete definition of the `Future` trait: trait Future { type Output; fn poll(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Self::Output>; } -Ignore `Pin` and `Context` for now, you don't need them for high-level understanding. Looking at it closely, we see the following: it is generic over the `Output`. It provides a function called `poll`, which allows us to check on the state of the current computation. +Looking at it closely, we see the following: + +- It is generic over the `Output`. +- It provides a function called `poll`, which allows us to check on the state of the current computation. +- (Ignore `Pin` and `Context` for now, you don't need them for high-level understanding.) + Every call to `poll()` can result in one of these two cases: -1. The future is done, `poll` will return [`Poll::Ready`](https://doc.rust-lang.org/std/task/enum.Poll.html#variant.Ready) -2. The future has not finished executing, it will return [`Poll::Pending`](https://doc.rust-lang.org/std/task/enum.Poll.html#variant.Pending) +1. 
The computation is done, `poll` will return [`Poll::Ready`](https://doc.rust-lang.org/std/task/enum.Poll.html#variant.Ready) +2. The computation has not finished executing, it will return [`Poll::Pending`](https://doc.rust-lang.org/std/task/enum.Poll.html#variant.Pending) -This allows us to externally check if a `Future` has finished doing its work, or is finally done and can give us the value. The most simple way (but not efficient) would be to just constantly poll futures in a loop. There's optimisations here, and this is what a good runtime is does for you. +This allows us to externally check if a `Future` still has unfinished work, or is finally done and can give us the value. The most simple (but not efficient) way would be to just constantly poll futures in a loop. There are optimisations possible, and this is what a good runtime does for you. Note that calling `poll` after case 1 happened may result in confusing behaviour. See the [futures-docs](https://doc.rust-lang.org/std/future/trait.Future.html) for details. ## Async -While the `Future` trait has existed in Rust for a while, it was inconvenient to build and describe them. For this, Rust now has a special syntax: `async`. The example from above, implemented in `async-std`, would look like this: +While the `Future` trait has existed in Rust for a while, it was inconvenient to build and describe them. For this, Rust now has a special syntax: `async`. The example from above, implemented with `async-std`, would look like this: use async_std::fs::File; async fn read_file(path: &str) -> Result<String, std::io::Error> { let mut file = File::open(path).await?; let mut contents = String::new(); file.read_to_string(&mut contents).await?; Ok(contents) } Amazingly little difference, right? All we did is label the function `async` and insert 2 special commands: `.await`. -This function sets up a deferred computation. When this function is called, it will produce a `Future` instead of immediately returning a String. (Or, more precisely, generate a type for you that implements `Future`.) 
+This `async` function sets up a deferred computation. When this function is called, instead of returning the computed `String`, it will produce a `Future`. (Or, more precisely, will generate a type for you that implements `Future`.) ## What does `.await` do? -The `.await` postfix does exactly what it says on the tin: the moment you use it, the code will wait until the requested action (e.g. opening a file or reading all data in it) is finished. `.await?` is not special, it's just the application of the `?` operator to the result of `.await`. So, what is gained over the initial code example? We're getting futures and then immediately waiting for them? +The `.await` postfix does exactly what it says on the tin: the moment you use it, the code will wait until the requested action (e.g. opening a file or reading all data in it) is finished. The `.await?` is not special, it's just the application of the `?` operator to the result of `.await`. So, what is gained over the initial code example? We're getting futures and then immediately waiting for them? -The `.await` points act as a marker. Here, the code will wait for a `Future` to produce its value. How will a future finish? You don't need to care! The marker allows the code later *executing* this piece of code (usually called the “runtime”) when it can take some time to care about all the other things it has to do. It will come back to this point when the operation you are doing in the background is done. This is why this style of programming is also called *evented programming*. We are waiting for *things to happen* (e.g. a file to be opened) and then react (by starting to read). +The `.await` points act as a marker. Here, the code will wait for a `Future` to produce its value. How will a future finish? You don't need to care! The marker allows the component (usually called the “runtime”) in charge of *executing* this piece of code to take care of all the other things it has to do while the computation finishes. 
It will come back to this point when the operation you are doing in the background is done. This is why this style of programming is also called *evented programming*. We are waiting for *things to happen* (e.g. a file to be opened) and then react (by starting to read). When executing 2 or more of these functions at the same time, our runtime system is then able to fill the wait time with handling *all the other events* currently going on. From 310cda671c40789832f8b4722874ecbb460752d1 Mon Sep 17 00:00:00 2001 From: Daniel Carosone Date: Sat, 17 Aug 2019 10:16:04 +1000 Subject: [PATCH 5/9] use rust code blocks for consistent style --- docs/src/concepts/futures.md | 58 ++++++++++++++++++++---------------- 1 file changed, 33 insertions(+), 25 deletions(-) diff --git a/docs/src/concepts/futures.md b/docs/src/concepts/futures.md index 602f5d76..b1ba3bf0 100644 --- a/docs/src/concepts/futures.md +++ b/docs/src/concepts/futures.md @@ -50,24 +50,28 @@ Remember the talk about "deferred computation" in the intro? That's all it is. I Let's have a look at a simple function, specifically the return value: - fn read_file(path: &str) -> Result<String, std::io::Error> { - let mut file = File::open(path)?; - let mut contents = String::new(); - file.read_to_string(&mut contents)?; - Ok(contents) - } +```rust +fn read_file(path: &str) -> Result<String, std::io::Error> { + let mut file = File::open(path)?; + let mut contents = String::new(); + file.read_to_string(&mut contents)?; + Ok(contents) +} +``` You can call that at any time, so you are in full control on when you call it. But here's the problem: the moment you call it, you transfer control to the called function until it returns a value - eventually. Note that this return value talks about the past. The past has a drawback: all decisions have been made. It has an advantage: the outcome is visible. We can unwrap the results of the program's past computation, and then decide what to do with it. But we wanted to abstract over *computation* and let someone else choose how to run it. 
That's fundamentally incompatible with looking at the results of previous computation all the time. So, let's find a type that *describes* a computation without running it. Let's look at the function again: - fn read_file(path: &str) -> Result<String, std::io::Error> { - let mut file = File::open(path)?; - let mut contents = String::new(); - file.read_to_string(&mut contents)?; - Ok(contents) - } +```rust +fn read_file(path: &str) -> Result<String, std::io::Error> { + let mut file = File::open(path)?; + let mut contents = String::new(); + file.read_to_string(&mut contents)?; + Ok(contents) +} +``` Speaking in terms of time, we can only take action *before* calling the function or *after* the function returned. This is not desirable, as it takes from us the ability to do something *while* it runs. When working with parallel code, this would take from us the ability to start a parallel task while the first runs (because we gave away control). This is the moment where we could reach for [threads](https://en.wikipedia.org/wiki/Thread_(computing)). But threads are a very specific concurrency primitive and we said that we are searching for an abstraction. What we are searching for is something that represents ongoing work towards a result in the future. Whenever we say "something" in Rust, we almost always mean a trait. Let's start with an incomplete definition of the `Future` trait: - trait Future { - type Output; - - fn poll(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Self::Output>; - } +```rust +trait Future { + type Output; + + fn poll(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Self::Output>; +} +``` Looking at it closely, we see the following: @@ -99,14 +105,16 @@ Note that calling `poll` after case 1 happened may result in confusing behaviour ## Async While the `Future` trait has existed in Rust for a while, it was inconvenient to build and describe them. For this, Rust now has a special syntax: `async`. 
The example from above, implemented with `async-std`, would look like this: - use async_std::fs::File; - - async fn read_file(path: &str) -> Result<String, std::io::Error> { - let mut file = File::open(path).await?; - let mut contents = String::new(); - file.read_to_string(&mut contents).await?; - Ok(contents) - } +```rust +use async_std::fs::File; + +async fn read_file(path: &str) -> Result<String, std::io::Error> { + let mut file = File::open(path).await?; + let mut contents = String::new(); + file.read_to_string(&mut contents).await?; + Ok(contents) +} +``` Amazingly little difference, right? All we did is label the function `async` and insert 2 special commands: `.await`. From a4cccb14503996856b5568249419a05ff65a1dad Mon Sep 17 00:00:00 2001 From: Daniel Carosone Date: Sat, 17 Aug 2019 10:18:11 +1000 Subject: [PATCH 6/9] link and more apostrophes --- docs/src/concepts/tasks.md | 16 +++++++++------- 1 file changed, 9 insertions(+), 7 deletions(-) diff --git a/docs/src/concepts/tasks.md b/docs/src/concepts/tasks.md index 881dadca..2e5df26f 100644 --- a/docs/src/concepts/tasks.md +++ b/docs/src/concepts/tasks.md @@ -2,7 +2,7 @@ Now that we know what Futures are, we now want to run them! -In `async-std`, the `tasks` (TODO: link) module is responsible for this. The simplest way is using the `block_on` function: +In `async-std`, the [`tasks`][tasks] module is responsible for this. The simplest way is using the `block_on` function: ```rust use async_std::fs::File; @@ -29,7 +29,7 @@ fn main() { } ``` -This asks the runtime baked into `async_std` to execute the code that reads a file. Let’s go one by one, though, inside to outside. +This asks the runtime baked into `async_std` to execute the code that reads a file. Let's go one by one, though, inside to outside. ```rust async { @@ -43,7 +43,7 @@ async { } ``` This is an `async` *block*. Async blocks are necessary to call `async` functions, and will instruct the compiler to include all the relevant instructions to do so. 
In Rust, all blocks return a value and `async` blocks happen to return a value of the kind `Future`. -But let’s get to the interesting part: +But let's get to the interesting part: ```rust @@ -51,20 +51,20 @@ task::spawn(async { }) ``` -`spawn` takes a Future and starts running it on a `Task`. It returns a `JoinHandle`. Futures in Rust are sometimes called *cold* Futures. You need something that starts running them. To run a Future, there may be some additional bookkeeping required, e.g. if it's running or finished, where it is being placed in memory and what the current state is. This bookkeeping part is abstracted away in a `Task`. A `Task` is similar to a `Thread`, with some minor differences: it will be scheduled by the program instead of the operating system kernel and if it encounters a point where it needs to wait, the program itself responsible for waking it up again. We’ll talk a little bit about that later. An `async_std` task can also has a name and an ID, just like a thread. +`spawn` takes a Future and starts running it on a `Task`. It returns a `JoinHandle`. Futures in Rust are sometimes called *cold* Futures. You need something that starts running them. To run a Future, there may be some additional bookkeeping required, e.g. if it's running or finished, where it is being placed in memory and what the current state is. This bookkeeping part is abstracted away in a `Task`. A `Task` is similar to a `Thread`, with some minor differences: it will be scheduled by the program instead of the operating system kernel and if it encounters a point where it needs to wait, the program itself responsible for waking it up again. We'll talk a little bit about that later. An `async_std` task can also has a name and an ID, just like a thread. For now, it is enough to know that once you `spawn`ed a task, it will continue running in the background. The `JoinHandle` in itself is a future that will finish once the `Task` ran to conclusion. 
Much like with `threads` and the `join` function, we can now call `block_on` on the handle to *block* the program (or the calling thread, to be specific) to wait for it to finish. ## Tasks in `async_std` -Tasks in `async_std` are one of the core abstractions. Much like Rust’s `thread`s, they provide some practical functionality over the raw concept. `Tasks` have a relationship to the runtime, but they are in themselves separate. `async_std` tasks have a number of desirable properties: +Tasks in `async_std` are one of the core abstractions. Much like Rust's `thread`s, they provide some practical functionality over the raw concept. `Tasks` have a relationship to the runtime, but they are in themselves separate. `async_std` tasks have a number of desirable properties: - They are allocated in one single allocation - All tasks have a *backchannel*, which allows them to propagate results and errors to the spawning task through the `JoinHandle` - The carry desirable metadata for debugging - They support task local storage -`async_std`s task api handles setup and teardown of a backing runtime for you and doesn’t rely on a runtime being started. +`async_std`s task api handles setup and teardown of a backing runtime for you and doesn't rely on a runtime being started. ## Blocking @@ -126,4 +126,6 @@ That might seem odd at first, but the other option would be to silently ignore p `async_std` comes with a useful `Task` type that works with an API similar to `std::thread`. It covers error and panic behaviour in a structured and defined way. -Tasks are separate concurrent units and sometimes they need to communicate. That’s where `Stream`s come in. +Tasks are separate concurrent units and sometimes they need to communicate. That's where `Stream`s come in. 
+ +[tasks]: https://docs.rs/async-std/latest/async_std/task/index.html From 6b23760c4a2945c993bd6d2e49ac3d9d204edc87 Mon Sep 17 00:00:00 2001 From: Daniel Carosone Date: Sat, 17 Aug 2019 10:32:42 +1000 Subject: [PATCH 7/9] additional tweaks --- docs/src/concepts/tasks.md | 22 ++++++++++++---------- 1 file changed, 12 insertions(+), 10 deletions(-) diff --git a/docs/src/concepts/tasks.md b/docs/src/concepts/tasks.md index 2e5df26f..0b84b8c5 100644 --- a/docs/src/concepts/tasks.md +++ b/docs/src/concepts/tasks.md @@ -1,6 +1,6 @@ # Tasks -Now that we know what Futures are, we now want to run them! +Now that we know what Futures are, we want to run them! In `async-std`, the [`tasks`][tasks] module is responsible for this. The simplest way is using the `block_on` function: @@ -51,9 +51,11 @@ task::spawn(async { }) ``` -`spawn` takes a Future and starts running it on a `Task`. It returns a `JoinHandle`. Futures in Rust are sometimes called *cold* Futures. You need something that starts running them. To run a Future, there may be some additional bookkeeping required, e.g. if it's running or finished, where it is being placed in memory and what the current state is. This bookkeeping part is abstracted away in a `Task`. A `Task` is similar to a `Thread`, with some minor differences: it will be scheduled by the program instead of the operating system kernel and if it encounters a point where it needs to wait, the program itself responsible for waking it up again. We'll talk a little bit about that later. An `async_std` task can also has a name and an ID, just like a thread. +`spawn` takes a `Future` and starts running it on a `Task`. It returns a `JoinHandle`. Futures in Rust are sometimes called *cold* Futures. You need something that starts running them. To run a Future, there may be some additional bookkeeping required, e.g. whether it's running or finished, where it is being placed in memory and what the current state is. 
This bookkeeping part is abstracted away in a `Task`.
 
-For now, it is enough to know that once you `spawn`ed a task, it will continue running in the background. The `JoinHandle` in itself is a future that will finish once the `Task` ran to conclusion. Much like with `threads` and the `join` function, we can now call `block_on` on the handle to *block* the program (or the calling thread, to be specific) to wait for it to finish.
+A `Task` is similar to a `Thread`, with some minor differences: it will be scheduled by the program instead of the operating system kernel, and if it encounters a point where it needs to wait, the program itself is responsible for waking it up again. We'll talk a little bit about that later. An `async_std` task can also have a name and an ID, just like a thread.
+
+For now, it is enough to know that once you have `spawn`ed a task, it will continue running in the background. The `JoinHandle` is itself a future that will finish once the `Task` has run to conclusion. Much like with `threads` and the `join` function, we can now call `block_on` on the handle to *block* the program (or the calling thread, to be specific) and wait for it to finish.
 
 ## Tasks in `async_std`
 
@@ -61,14 +63,14 @@ Tasks in `async_std` are one of the core abstractions. Much like Rust's `thread`
 
 - They are allocated in one single allocation
 - All tasks have a *backchannel*, which allows them to propagate results and errors to the spawning task through the `JoinHandle`
-- The carry desirable metadata for debugging
+- They carry useful metadata for debugging
 - They support task local storage
 
-`async_std`s task api handles setup and teardown of a backing runtime for you and doesn't rely on a runtime being started.
+`async_std`'s task API handles setup and teardown of a backing runtime for you and doesn't rely on a runtime being explicitly started.
 
 ## Blocking
 
-`Task`s are assumed to run _concurrently_, potentially by sharing a thread of execution. 
This means that operations blocking an _operating system thread_, such as `std::thread::sleep` or io function from Rusts stdlib will _stop execution of all tasks sharing this thread_. Other libraries (such as database drivers) have similar behaviour. Note that _blocking the current thread_ is not in and by itself bad behaviour, just something that does not mix well with they concurrent execution model of `async-std`. Essentially, never do this:
+`Task`s are assumed to run _concurrently_, potentially by sharing a thread of execution. This means that operations blocking an _operating system thread_, such as `std::thread::sleep` or I/O functions from Rust's `std` library, will _stop execution of all tasks sharing this thread_. Other libraries (such as database drivers) have similar behaviour. Note that _blocking the current thread_ is not in and of itself bad behaviour, just something that does not mix well with the concurrent execution model of `async-std`. Essentially, never do this:
 
 ```rust
 fn main() {
@@ -79,13 +81,13 @@ fn main() {
 }
 ```
 
-If you want to mix operation kinds, consider putting such operations on a `thread`.
+If you want to mix operation kinds, consider putting such blocking operations on a separate `thread`.
 
 ## Errors and panics
 
-`Task`s report errors through normal channels: If they are fallible, their `Output` should be of kind `Result`.
+Tasks report errors through normal patterns: if they are fallible, their `Output` should be of kind `Result`.
 
-In case of `panic`, behaviour differs depending on if there's a reasonable part that addresses the `panic`. If not, the program _aborts_.
+In case of `panic`, behaviour differs depending on whether there's a reasonable part that addresses the `panic`. If not, the program _aborts_.
 
 
In practice, that means that `block_on` propagates panics to the blocking component:
 
@@ -102,7 +104,7 @@ thread 'async-task-driver' panicked at 'test', examples/panic.rs:8:9
 note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
 ```
 
-While panicing a spawned tasks will abort:
+While panicking in a spawned task will abort:
 
 ```rust
 task::spawn(async {

From a41b87205d9547fb1abeb8b9388012a7bc7149b7 Mon Sep 17 00:00:00 2001
From: Daniel Carosone
Date: Sat, 17 Aug 2019 11:50:59 +1000
Subject: [PATCH 8/9] distributed massages are the best

---
 docs/src/tutorial/index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/src/tutorial/index.md b/docs/src/tutorial/index.md
index 7509dd20..20188b85 100644
--- a/docs/src/tutorial/index.md
+++ b/docs/src/tutorial/index.md
@@ -3,7 +3,7 @@
 Nothing is as simple as a chat server, right? Not quite, chat servers
 already expose you to all the fun of asynchronous programming: how
 do you handle client connecting concurrently. How do handle them disconnecting?
-How do your distribute the massages?
+How do your distribute the messages?
 
 In this tutorial, we will show you how to write one in `async-std`.

From 9974ff692e332ab25d8faeec2835a50df1b8b9e9 Mon Sep 17 00:00:00 2001
From: Daniel Carosone
Date: Sat, 17 Aug 2019 11:58:35 +1000
Subject: [PATCH 9/9] markup and typo nits

---
 docs/src/tutorial/index.md | 4 ++--
 docs/src/tutorial/specification.md | 9 ++++-----
 2 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/docs/src/tutorial/index.md b/docs/src/tutorial/index.md
index 20188b85..edc2ae69 100644
--- a/docs/src/tutorial/index.md
+++ b/docs/src/tutorial/index.md
@@ -2,8 +2,8 @@
 Nothing is as simple as a chat server, right? Not quite, chat servers
 already expose you to all the fun of asynchronous programming: how
-do you handle client connecting concurrently. How do handle them disconnecting?
-How do your distribute the messages?
+do you handle clients connecting concurrently? 
How do you handle them disconnecting? +How do you distribute the messages? In this tutorial, we will show you how to write one in `async-std`. diff --git a/docs/src/tutorial/specification.md b/docs/src/tutorial/specification.md index 4644116e..90de7674 100644 --- a/docs/src/tutorial/specification.md +++ b/docs/src/tutorial/specification.md @@ -8,15 +8,15 @@ Protocol consists of utf-8 messages, separated by `\n`. The client connects to the server and sends login as a first line. After that, the client can send messages to other clients using the following syntax: -``` -login1, login2, ... login2: message +```text +login1, login2, ... loginN: message ``` Each of the specified clients than receives a `from login: message` message. A possible session might look like this -``` +```text On Alice's computer: | On Bob's computer: > alice | > bob @@ -29,7 +29,6 @@ On Alice's computer: | On Bob's computer: The main challenge for the chat server is keeping track of many concurrent connections. The main challenge for the chat client is managing concurrent outgoing messages, incoming messages and user's typing. - ## Getting Started Let's create a new Cargo project: @@ -45,4 +44,4 @@ At the moment `async-std` requires Rust nightly, so let's add a rustup override $ rustup override add nightly $ rustc --version rustc 1.38.0-nightly (c4715198b 2019-08-05) -``` \ No newline at end of file +```
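
The wire format in the specification patch above (`login1, login2, ... loginN: message`) can be exercised with a small, stdlib-only sketch. This is illustrative only, not code from the tutorial or async-std: the helper name `parse_line` and its return shape are assumptions.

```rust
// Illustrative sketch: parse one protocol line of the form
// "login1, login2, ... loginN: message" from the specification above.
// `parse_line` is a hypothetical helper, not part of async-std or the tutorial.
fn parse_line(line: &str) -> Option<(Vec<String>, String)> {
    // Split at the first ':' — everything before it is the recipient list.
    let (dest, msg) = line.split_once(':')?;
    let logins = dest.split(',').map(|l| l.trim().to_string()).collect();
    Some((logins, msg.trim().to_string()))
}

fn main() {
    let (logins, msg) = parse_line("alice, bob: Hello!").expect("well-formed line");
    assert_eq!(logins, vec!["alice".to_string(), "bob".to_string()]);
    assert_eq!(msg, "Hello!");
    // A line with no ':' separator is not a valid message.
    assert!(parse_line("just a login").is_none());
}
```

Note this covers only the client-to-server direction; the `from login: message` replies the spec describes, and the actual async delivery, are not handled here.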