I'm bad at working through my todo list, and I'm really good at procrastinating. So I decided to procrastinate on my todo list by building a display that shows all my todo items, in the hope that it would motivate me to actually finish more of them in the future.
Raspberry Pi
E-Paper display
I placed the Pi and the display on a larger piece of wood, where I plan to add more e-ink status displays in the future (e-ink screens just feel so futuristic!). I also added a magnetized pinboard where I can put things like concert tickets and where my friends can leave edgy notes.
React and Plasmic are used to render my Todoist todo list the way I want it to look. Rust code glues it all together: it uses headless_chrome to render the React site in Chromium, takes a screenshot, and passes it to the display.
I wanted something visual to design the "UI" of the display, so I decided to use Plasmic, which we've used successfully at work before. It's a visual design tool that generates React code, which worked well for my use case.
The data is fetched using the Todoist JavaScript client:
import { TodoistApi } from "@doist/todoist-api-typescript";

const getTasks = async (client: TodoistApi | null) => {
  if (!client) return undefined;
  const tasks = await client.getTasks({
    filter: "today | overdue",
  });
  // Sort by due date, oldest first
  tasks.sort((a, b) => {
    return new Date(a.due?.date || 0) > new Date(b.due?.date || 0) ? 1 : -1;
  });
  // Tasks due today go first, everything overdue after
  const today = tasks.filter(task => task.due?.date === new Date().toISOString().split("T")[0]);
  const due = tasks.filter(task => task.due?.date !== new Date().toISOString().split("T")[0]);
  return [...today, ...due];
}
The tasks are then passed to Plasmic via a DataProvider:
import React, { useEffect, useMemo } from "react";
import useSwr from "swr";
import { DataProvider } from "@plasmicapp/host";
import { Navigate } from "react-router-dom";

export const TodoListProvider = ({children}: { children: React.ReactNode }) => {
  const client = useMemo(() => getClient(), []);
  const {data} = useSwr("tasks", () => getTasks(client), {
    refreshInterval: 30000,
  });
  useEffect(() => {
    if (data !== undefined) {
      // Bump a global version counter whenever SWR hands us new data
      (window as any)["version"] = ((window as any)["version"] || 0) + 1;
      console.log("data changed", data, (window as any)["version"]);
    }
  }, [data]);
  if (!client) return <Navigate to={"/token-form"}/>;
  return <DataProvider name={"todos"} data={data}>{children}</DataProvider>;
}
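Inside Plasmic Studio the provided data then shows up under the todos name. On the React side, the generated component just needs to be rendered inside the provider, roughly like this (PlasmicTodoPage is a hypothetical name for the generated component, not the one from my repo):

import { PlasmicTodoPage } from "./plasmic/PlasmicTodoPage";

// Sketch: wrap the Plasmic-generated page in the data provider
export const TodoPage = () => (
  <TodoListProvider>
    <PlasmicTodoPage />
  </TodoListProvider>
);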
I'm sure there would be more elegant and efficient solutions, but I'm just using SWR to fetch the todo list data every 30 seconds and relying on SWR to figure out if something changed. If something did change, the useEffect sets a version property on the window, which is then polled by the Rust code.
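Lastly, the getClient helper builds the Todoist client from the API token, which the Rust side passes in via the URL (see the ?token= parameter further down). It isn't shown above, but a minimal sketch, assuming the token comes from the query string, could look like this (the actual version in the repo may differ):

import { TodoistApi } from "@doist/todoist-api-typescript";

// Sketch only: read the token from the query string and construct the client.
export const getClient = (): TodoistApi | null => {
  const token = new URLSearchParams(window.location.search).get("token");
  return token ? new TodoistApi(token) : null;
};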
The Rust code has multiple parts that glue everything together: the main loop, which launches headless Chrome and checks if something changed; the code that converts the PNG into an image the e-ink screen can show; and a simple axum web server that hosts the static React files.
The most interesting part is probably the code that generates the image in the right format for the e-ink display. I'm using the dither crate to dither the image and then manually convert it to the binary format the Waveshare display expects.
Dithering the image:
lazy_static::lazy_static! {
    static ref PALETTE: Vec<RGB<u8>> = palette::parse(include_str!("../../palette.plt")).unwrap();
}

pub fn dither(image: Vec<u8>) -> anyhow::Result<Vec<u8>> {
    // Fit the screenshot onto a white 480x800 canvas
    let img: RgbImage = {
        let mut target = ImageBuffer::from_pixel(480, 800, Rgb([255, 255, 255]));
        let image = image::load_from_memory(&image)?.into_rgb8();
        utils::fit_image(&mut target, &image);
        target
    };
    // The display is mounted in portrait orientation, so rotate first
    let rotated = imageops::rotate90(&img);
    let img = rotated.pixels().map(|p| RGB::from(p.0));
    let img: Img<RGB<f64>> = Img::<RGB<u8>>::new(img, 800)
        .unwrap()
        .convert_with(|rgb| rgb.convert_with(f64::from));
    let dithered_img = ditherer::ATKINSON
        .dither(img, palette::quantize(&PALETTE))
        .convert_with(|rgb| rgb.convert_with(clamp_f64_to_u8));
This loads the image from the provided data, fits it onto a white 480×800 canvas, rotates it by 90 degrees (since my e-ink display is mounted in portrait orientation) and finally dithers it with the Atkinson method. Different dithering algorithms are presented here. I found that the Atkinson method works well for my use case.
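The utils::fit_image helper isn't shown above. A minimal sketch of what such a helper could look like with the image crate, assuming it scales the source to fit and centers it (the version in my repo may differ):

use image::{imageops, RgbImage};

// Sketch: scale `src` to fit inside `target` while preserving the aspect
// ratio, then draw it centered. An assumption, not the exact code from the repo.
pub fn fit_image(target: &mut RgbImage, src: &RgbImage) {
    let (tw, th) = target.dimensions();
    let (sw, sh) = src.dimensions();
    let scale = (tw as f64 / sw as f64).min(th as f64 / sh as f64);
    let (nw, nh) = ((sw as f64 * scale) as u32, (sh as f64 * scale) as u32);
    let resized = imageops::resize(src, nw, nh, imageops::FilterType::Triangle);
    // overlay takes i64 offsets in recent versions of the image crate
    imageops::overlay(target, &resized, ((tw - nw) / 2) as i64, ((th - nh) / 2) as i64);
}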
The palette is loaded from a text file defining the three colors to use as a reference during dithering. It looks like this:
000000
ffffff
ff0000
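palette::parse and palette::quantize are small helpers in my own palette module. Roughly, they could look like this (a sketch, not the exact code from the repo): parse reads one hex color per line, and quantize builds the closure the dither crate expects, mapping each pixel to its nearest palette color plus the remaining error:

use dither::prelude::*;

// Sketch of the palette helpers (assumption; the real module may differ).

// Parse one hex color per line (e.g. "ff0000") into the dither crate's RGB type.
pub fn parse(src: &str) -> anyhow::Result<Vec<RGB<u8>>> {
    src.lines()
        .map(str::trim)
        .filter(|line| !line.is_empty())
        .map(|line| {
            let v = u32::from_str_radix(line, 16)?;
            Ok(RGB((v >> 16) as u8, (v >> 8) as u8, v as u8))
        })
        .collect()
}

// Build the quantization function the dither crate expects: for each pixel,
// return the nearest palette color plus the error to diffuse to its neighbors.
pub fn quantize(palette: &[RGB<u8>]) -> impl Fn(RGB<f64>) -> (RGB<f64>, RGB<f64>) + '_ {
    move |pixel| {
        let nearest = palette
            .iter()
            .map(|c| c.convert_with(f64::from))
            .min_by(|a, b| dist(pixel, *a).total_cmp(&dist(pixel, *b)))
            .unwrap(); // the palette is never empty
        (nearest, pixel - nearest)
    }
}

// Squared euclidean distance between two colors
fn dist(a: RGB<f64>, b: RGB<f64>) -> f64 {
    let RGB(dr, dg, db) = a - b;
    dr * dr + dg * dg + db * db
}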
Then I convert the image to the right format:
    let dithered_vec = dithered_img.into_vec();
    let get_color = |color: u8| {
        dithered_vec
            .chunks_exact(8)
            .map(move |pixels| {
                let mut val = 0;
                for (i, pixel) in pixels.iter().enumerate() {
                    // Set the bit if this pixel matches the requested palette color
                    let p = if PALETTE.iter().position(|col| pixel == col).unwrap() as u8 == color {
                        1
                    } else {
                        0
                    };
                    val |= p << (7 - i);
                }
                val
            })
    };
    // White plane (palette index 1) first, then the red plane (index 2)
    let data = get_color(1).chain(get_color(2)).collect();
    Ok(data)
}
The output from the dithering is just a regular bitmap where every pixel is either ffffff, 000000, or ff0000.
For the black/white/red e-ink screen, the image data format consists of two concatenated buffers with one bit per pixel: one containing the white image data and one the red. To build these, for each color I iterate through the dithered image data in chunks of 8 pixels, check each pixel to see if it matches the current color (red or white), and if so flip the corresponding bit in the output u8 to 1.
The color u8 in the get_color closure is simply the index of the color in the palette.
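For example, if eight consecutive pixels come out as white, white, red, black, white, black, red, white, then get_color(1) packs them into the byte 0b11001001 for the white buffer, and get_color(2) into 0b00100010 for the red buffer.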
To communicate with the display I use the epd-waveshare crate. Support for the black/white/red screen hasn't been released yet, so I had to add it directly from the GitHub repo.
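In Cargo.toml that is just a git dependency, something like this (pinning a specific rev is probably a good idea):

epd-waveshare = { git = "https://github.com/caemor/epd-waveshare" }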
Displaying the image is quite trivial with the crate:
pub fn display(img: Vec<u8>) -> anyhow::Result<()> {
    let gpio = Gpio::new()?;
    // Configure Digital I/O Pin to be used as Chip Select for SPI
    let mut cs = gpio.get(8)?.into_output();
    cs.set_high();
    let busy = gpio.get(24)?.into_input();
    let mut dc = gpio.get(25)?.into_output();
    dc.set_high();
    let mut rst = gpio.get(17)?.into_output();
    rst.set_high();
    let mut delay = Delay {};
    let mut spi = Spi::new(Spi0, SlaveSelect::Ss0, 4_000_000, spi::Mode::Mode0)?;

    // Setup EPD
    let mut epd = Epd7in5::new(&mut spi, cs, busy, dc, rst, &mut delay, Some(200_000))?;
    println!("EPD initialized");
    epd.wait_until_idle(&mut spi, &mut delay)?;

    // Display updated frame
    println!("Updating frame");
    epd.update_frame(&mut spi, img.as_slice(), &mut delay)?;
    epd.wait_until_idle(&mut spi, &mut delay)?;
    println!("Frame updated");
    epd.display_frame(&mut spi, &mut delay)?;
    epd.wait_until_idle(&mut spi, &mut delay)?;
    epd.sleep(&mut spi, &mut delay)?;
    Ok(())
}
I initially had trouble getting it to display anything: at first I used linux-embedded-hal and had to switch to rppal to get it to work, though I'm not sure why.
Similar to the include_str! macro, there is an include_dir! macro (from the include_dir crate), which makes it really simple to bundle static HTML files with a Rust binary:
static STATIC_DIR: Dir<'_> = include_dir!("$CARGO_MANIFEST_DIR/web/build");
With this STATIC_DIR we can make a simple axum handler that serves the files:
async fn static_path(path: Option<Path<String>>) -> impl IntoResponse {
    let path = path.map(|p| p.0).unwrap_or_else(|| "index.html".to_string());
    let path = path.trim_start_matches('/');
    let mime_type = mime_guess::from_path(path).first_or_text_plain();

    // Fall back to index.html so client-side routes still work
    match STATIC_DIR.get_file(path).or(STATIC_DIR.get_file("index.html")) {
        None => Response::builder()
            .status(StatusCode::NOT_FOUND)
            .body(body::boxed(Empty::new()))
            .unwrap(),
        Some(file) => Response::builder()
            .status(StatusCode::OK)
            .header(
                header::CONTENT_TYPE,
                HeaderValue::from_str(mime_type.as_ref()).unwrap(),
            )
            .body(body::boxed(Full::from(file.contents())))
            .unwrap(),
    }
}
pub fn serve() -> anyhow::Result<()> {
    let rt = runtime::Builder::new_current_thread().enable_io().build()?;
    rt.block_on(async {
        let app = axum::Router::new()
            .route("/", get(static_path))
            .route("/*path", get(static_path));
        let addr = std::net::SocketAddr::from(([127, 0, 0, 1], 3000));
        axum::Server::bind(&addr)
            .serve(app.into_make_service())
            .await?;
        Ok(())
    })
}
We launch Chrome, navigate to our site, and check if something changed:
let browser = Browser::new(LaunchOptions {
    window_size: Some((480, 800)),
    headless: true,
    ..LaunchOptions::default()
})?;
let tab = browser.new_tab()?;
tab.navigate_to(&format!("{}?token={}", address, args.todoist_token))?;

let mut version = 0;
loop {
    sleep(std::time::Duration::from_secs(1));
    // Poll the version counter that the React app bumps on changes
    let w_version = tab.evaluate("window.version", false)?;
    let current_version = w_version.value.and_then(|v| v.as_u64()).unwrap_or(0u64);
    if current_version > version {
        version = current_version;
        let png = tab.capture_screenshot(
            Page::CaptureScreenshotFormatOption::Png,
            None,
            None,
            true,
        )?;
        let img = image::dither(png)?;
        display::display(img)?;
    }
}
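Putting it all together, main just needs to start the web server and run this loop against it. Roughly like this (a sketch; run_loop is a hypothetical wrapper around the loop above, and the real main in the repo handles arguments and errors differently):

fn main() -> anyhow::Result<()> {
    // Host the bundled React build in a background thread
    std::thread::spawn(|| {
        if let Err(e) = serve() {
            eprintln!("server error: {e:?}");
        }
    });
    // Run the screenshot loop against the local server
    run_loop("http://127.0.0.1:3000")
}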
That's it! If you'd like to see the full code, it's on my GitHub: https://github.com/lucasmerlin/todopi
A couple of years ago, a friend and I had the idea to build a digital picture frame that always shows the dankest memes from r/dankmemes and r/me_irl.
I got to work and ordered a small TFT monitor for one of the Pis I had lying around. I made a website (https://dankmeme-gallery.web.app/) that displayed a slideshow of hot dank memes from Reddit and ran it in a fullscreen Chrome window on the Pi. It worked, but it sucked: the display had shitty viewing angles, it probably used a ton of energy, and it always had to be plugged in.
We recently discussed whether it would be possible to build a better version of the meme display nowadays, and to my surprise I discovered that there are color e-ink displays now. These can display 7 different colors, which might sound limiting compared to the 16.7 million colors a typical modern monitor can show, but with dithering the results look pretty nice (as long as you don't look too closely).
Since the Inky Impression was sold out everywhere in Germany and I wanted to use the Pi Pico, I chose the Waveshare e-Paper module. But the Inky would've been fine too, and they also offer a Pi Pico version.
Other Parts:
The battery, holder and charger circuit are optional, but it's nice if the frame doesn't need a wire connected.