Scraping off the Dust: Redeploy of my Rust web app

I redeploy my Rust web app on AWS, this time onto the Ubuntu 18.04 image.

Part of a Series: Designing a Full-Featured WebApp with Rust
Part 1: Piecing Together a Rust Web Application
Part 2: My Next Step in Rust Web Application Dev
Part 3: It’s Not a Web Application Without a Database
Part 4: Better Logging for the Web Application
Part 5: Rust Web App Session Management with AWS
Part 6: OAuth Requests, APIs, Diesel, and Sessions
Part 7: Scraping off the Dust: Redeploy of my Rust web app
Part 8: Giving My App Secrets to the AWS SecretManager

Busy as a bee

Life has been busy – no apologies or excuses, but, ya know, it’s 2020. Yet, I’m trying to slowly make my way back into playing with Rust. I decided to move my EC2 instance from AWS-Linux to the Ubuntu image; for one, I got tired of fighting with LetsEncrypt to get it to renew my SSL cert every 3 months. Also, I wanted to see how a redeploy of my Rust web app would go and if it still worked (why wouldn’t it?). So, let’s see how tough it is to get my environment back to the same place. I took some notes (in case I needed to restart, sigh), so let’s go through it.

Go back to Part 1 to see what this fake web app is about and how I got here – I need to reread it myself! So, first, this is what I ended up needing to add to what I got from the default Ubuntu image:

sudo apt install build-essential checkinstall zlib1g-dev pkg-config libssl-dev libpq-dev postgresql postgresql-contrib -y

Much of that was needed to get OpenSSL installed; I was following along with hints here. Continuing those instructions, I did:

cd /usr/local/src/
sudo wget https://www.openssl.org/source/openssl-1.1.1g.tar.gz
sudo tar -xf openssl-1.1.1g.tar.gz

cd openssl-1.1.1g
sudo ./config --prefix=/usr/local/ssl --openssldir=/usr/local/ssl shared zlib
sudo make
sudo make test
sudo make install
echo "/usr/local/ssl/lib" | sudo tee /etc/ld.so.conf.d/openssl-1.1.1g.conf # sudo echo > file fails: the redirect runs unprivileged
sudo ldconfig -v
sudo mv /usr/bin/c_rehash /usr/bin/c_rehash.backup
sudo mv /usr/bin/openssl /usr/bin/openssl.backup
sudo nano /etc/environment # to add "/usr/local/ssl/bin" to the PATH

Next, instead of solely storing my code on a potentially tenuous EC2 server, I wanted to keep it backed up on my Google Drive (or whatever you like – this solution works with MANY network storage providers). I used rclone for my Raspberry Pi photo frame, so I was already familiar with it. This is weird though: I don’t really need this for projects I store in GitHub… gotta think about it… maybe I just need a /gdrive synced dir for “things”.

curl https://rclone.org/install.sh | sudo bash
rclone config # to add google drive and authorize it

mkdir ~/projects
mkdir ~/projects/rust



Ok, the most fun step!!

curl https://sh.rustup.rs -sSf | sh
cd ~/projects/rust
git clone git@github.com:jculverhouse/pinpoint_shooting.git

I need nginx for my app:

sudo apt install nginx
sudo service nginx start

And now the much more reliable LetsEncrypt setup on Ubuntu 18.04:

# follow instructions at https://certbot.eff.org/lets-encrypt
# setup root cronjob to renew once/week

For my Rocket-powered Rust app, I followed some reminders here to connect it to nginx. Simple enough, really. What’s mostly relevant:

...
server_name pinpointshooting.com; # managed by Certbot
location / {
    proxy_pass http://127.0.0.1:3000;
}
...

What? Nginx still has TLS 1.0 and 1.1 turned on by default? I followed this, removed those, tested the config, and restarted nginx. I verified all of it with SSL Labs via https://www.ssllabs.com/ssltest/analyze.html :

sudo nano /etc/nginx/nginx.conf # to remove TLS1 TLS1.1 from any line
sudo nano /etc/letsencrypt/options-ssl-nginx.conf # to remove TLS1 TLS1.1 from any line
sudo nginx -t
sudo service nginx reload

I’ll need Postgres for my PinpointShooting app as well; I found some steps to follow here, plus I needed to set up my own app’s database and run the initial migrations to bring it up to date. That involved one more change so I could log in with a password from an account that isn’t a system user.

cargo install diesel_cli --no-default-features --features postgres
psql -d postgres -U postgres
  create user pinpoint password 'yeahrightgetyourown'; # save this in file .env in app dir
  create database pinpoint;
  grant all privileges on database pinpoint to pinpoint;

sudo nano /etc/postgresql/10/main/pg_hba.conf # to edit "local all all" line to be md5 instead of peer

sudo service postgresql restart
psql -d postgres -U pinpoint # to test password above; just exit

cd ~/projects/rust/pinpoint_shooting
diesel migration run

Finally:

rustup default nightly # because Rocket, sigh...
cargo update
cargo build --release
target/release/pps &

And, we’re back online! Turns out, a redeploy of my Rust web app was about as easy as I could expect! If the app happens to be running, check it out here (though, there isn’t much to see or anything to do): pinpointshooting.com. Also, browse the repo and feel free to send me comments on how to be better about using idiomatic Rust!

OAuth Requests, APIs, Diesel, and Sessions


Intertwined woven basket material… much like OAuth, accounts, APIs, Diesel, and sessions are intertwined in this project (probably in any project)
Things are starting to intertwine…

Some big changes in the repository just recently. I added Google Signin and Facebook Signin OAuth connections. I’m thinking I may not even configure an internal password on the site for users and instead just require one of those options. Probably I’ll add more, like 500px.com and/or Flickr, given the site’s purpose. A password field is still in my database though, so I haven’t given up the idea completely. Also, the OAuth requests create accounts using Diesel.

Users (now identified as shooters – we photographers haven’t given up that term) are now written to the db. I really fought with one Diesel feature, so that bit is still commented out in the code. In addition, I added my first API to POST to – so another step with the Rocket crate as well! I’d like to work my way into playing with a GraphQL endpoint so I can play with that as well!! (What’s the limit on crate dependencies in a project, anyway?!) I’m starting to think I won’t be able to tackle all of this in a single post – but let’s start!

OAuth vs Old and New Accounts

When a user arrives on the site, I check for a cookie with a session id (see my previous post). I decided, for now, I would use the User Agent (plus some random characters) as a fingerprint for creating the session id. So, when I am able to get a session_id from a cookie, I want to verify the User Agent is the same and that the session hasn’t expired. If the user arrives brand new, without a cookie, I immediately create an empty, no-user session for them. All of this is done, for now, right at the top of my index() route.

<src/routes.rs>

...
#[get("/")]
pub fn index(mut cookies: Cookies, nginx: Nginx) -> rocket_contrib::templates::Template {
    let session = get_or_setup_session(&mut cookies, &nginx);
    ...
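The fingerprint idea above can be sketched with std-only Rust. This is just an illustration: the real code hashes with SHA-256 over random characters, while `DefaultHasher` and these function names are stand-ins so the sketch compiles on its own.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Sketch of the session-id idea: combine the User Agent with some random
// characters, hash the result, and render it as hex. The real app uses
// SHA-256; DefaultHasher stands in so this needs no external crates.
fn make_session_id(user_agent: &str, random_chars: &str) -> String {
    let mut hasher = DefaultHasher::new();
    user_agent.hash(&mut hasher);
    random_chars.hash(&mut hasher);
    format!("{:x}", hasher.finish())
}

// Verifying later means re-checking that the stored session was issued to
// the same User Agent; the random part is gone, so the id is never re-derived.
fn same_user_agent(stored_ua: &str, incoming_ua: &str) -> bool {
    stored_ua == incoming_ua
}

fn main() {
    let sid = make_session_id("Mozilla/5.0", "r4nd0m");
    println!("session id: {}", sid);
    assert!(same_user_agent("Mozilla/5.0", "Mozilla/5.0"));
}
```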

After the index page loads, it shows the Google and Facebook Sign In buttons. When the user clicks one of those, the provider does the validation dance and gets permission from the user. When that is granted, my app gets a token back, which I send up to the server via a POST to /api/v1/tokensignin.

<src/api.rs>

use rocket::{http::Cookies, post, request::Form, FromForm};

use crate::oauth::*;
use crate::routes::Nginx;
use crate::session::*;

#[derive(FromForm)]
pub struct OAuthReq {
    pub g_token: Option<String>,  // google login req
    pub fb_token: Option<String>, // facebook login req
    pub name: String,
    pub email: String,
}

#[post("/api/v1/tokensignin", data = "<oauth_req>")]
pub fn tokensignin(mut cookies: Cookies, nginx: Nginx,
        oauth_req: Form<OAuthReq>) -> String
{
    let mut session = get_or_setup_session(&mut cookies, &nginx);

    if let Some(token) = &oauth_req.g_token {
        match verify_google_oauth(&mut session, &token,
            &oauth_req.name, &oauth_req.email)
        {
            true => {
                session.google_oauth = true;
                save_session_to_ddb(&mut session);
                "success".to_string()
            }
            false => {
                session.google_oauth = false;
                save_session_to_ddb(&mut session);
                "failed".to_string()
            }
        }
    } else if let Some(token) = &oauth_req.fb_token {
        match verify_facebook_oauth(&mut session, &token,
            &oauth_req.name, &oauth_req.email)
        {
            true => {
                session.facebook_oauth = true;
                save_session_to_ddb(&mut session);
                "success".to_string()
            }
            false => {
                session.facebook_oauth = false;
                save_session_to_ddb(&mut session);
                "failed".to_string()
            }
        }
    } else {
        "no token sent".to_string()
    }
}

OAuth Requests via HTTP POSTs

This is how you allow a POST with form data to come in – you set up a struct (OAuthReq in my example) of what you expect and bring that in as an input param. I also bring in any cookies that arrive with the request, plus some Nginx headers so I have access to the User Agent. In the code so far, I’m verifying either a Google or a Facebook token. Let’s look at the Google example (the Facebook one is nearly the same). Here are the relevant parts, but I’ll break some pieces down and go through them:

<src/oauth.rs>

...
pub fn verify_google_oauth(
    session: &mut Session,
    token: &String,
    name: &String,
    email: &String,
) -> bool {
    let mut google = google_signin::Client::new();
    google.audiences.push(CONFIG.google_api_client_id.clone());

    // an invalid token should mean "failed", not a panic in the handler
    let id_info = match google.verify(&token) {
        Ok(info) => info,
        Err(_) => return false,
    };
    let token = id_info.sub.clone();

    verify_token(session, "google".to_string(), &token, &name, &email)
}

Which leads right away to a big match:

fn verify_token(
    session: &mut Session,
    vendor: String,
    token: &String,
    name: &String,
    email: &String,
) -> bool {
    use crate::schema::oauth::dsl::*;
    use crate::schema::shooter::dsl::*;
    let connection = connect_pgsql();
    match oauth
        .filter(oauth_vendor.eq(&vendor))
        .filter(oauth_user.eq(&token))
        .first::<Oauth>(&connection)
    {

With the OK arm:

        // token WAS found in oauth table
        Ok(o) => {
            if let Some(id) = session.shooter_id {
                if id == o.shooter_id {
                    return true;
                } else {
                    return false;
                }
            } else {
                // log in user - what IS the problem with BelongsTo!?
                //if let Ok(s) = Shooter::belonging_to(&o)
                //    .load::<Shooter>(&connection)
                //{
                //    session.shooter_id = Some(shooter.shooter_id);
                //    session.shooter_name = Some(shooter.shooter_name);
                //    session.email_address = Some(shooter.email);
                return true;
                //} else {
                //    return false;
                //}
            }
        }

And the ERR arms:

        // token not found in oauth table
        Err(diesel::NotFound) => match session.shooter_id {
            Some(id) => {
                create_oauth(&connection, &vendor, token, id);
                true
            }
            None => match shooter
                .filter(shooter_email.eq(&email))
                .first::<Shooter>(&connection)
            {
                // email address WAS found in shooter table
                Ok(s) => {
                    create_oauth(&connection, &vendor, token, s.shooter_id);
                    true
                }
                // email address not found in shooter table
                Err(diesel::NotFound) => {
                    let this_shooter =
                        create_shooter(&connection, name, None,
                            email, &"active".to_string());
                    session.shooter_id = Some(this_shooter.shooter_id);
                    create_oauth(&connection, &vendor, token,
                        this_shooter.shooter_id);
                    true
                }
                Err(e) => {
                    panic!("Database error {}", e);
                }
            },
        },
        Err(e) => {
            panic!("Database error {}", e);
        }
    }
}



Simple Queries with Diesel

Breaking all that code down to smaller bits: first, I query the PgSQL database for the given oauth user:

match oauth
    .filter(oauth_vendor.eq(&vendor))
    .filter(oauth_user.eq(&token))
    .first::<Oauth>(&connection) {
        Ok(o) => { ... }
        Err(diesel::NotFound) => { ... }
        Err(e) => { ... }
}

Check the oauth table for records WHERE (filter) the oauth_vendor is google or facebook AND I’ve already stored the same validated oauth_user. I will get back either Ok(o) or Err(diesel::NotFound)… (or some worse error), so I match with those three arms.

If we did get a hit from the DB, that oauth record is already tied to a shooter_id (user id), unless something is very wrong. So, IF we also have a shooter_id defined in our current session, I just need to verify that they match and return true or false. But if we don’t have a shooter_id in our session, we know the oauth is tied to a shooter in the db, so this will log them in. Diesel has an easy way to get that parent record, which is what this should do:

// if let Ok(s) = Shooter::belonging_to(&o).load::<Shooter>(&connection) {
   ...

I fought and fought to get this to work, but you can see it is still commented out. From posts and chat around the Internet, I believe it can work – I think I either have a scope problem or my models aren’t set up correctly… this is how they look:

<src/model.rs>

...
#[derive(Identifiable, Queryable, Debug, PartialEq)]
#[table_name = "shooter"]
#[primary_key(shooter_id)]
pub struct Shooter {
    pub shooter_id: i32,
    pub shooter_name: String,
    pub shooter_password: String,
    pub shooter_status: String,
    pub shooter_email: String,
    pub shooter_real_name: String,
    pub shooter_create_time: chrono::NaiveDateTime,
    pub shooter_active_time: Option<chrono::NaiveDateTime>,
    pub shooter_inactive_time: Option<chrono::NaiveDateTime>,
    pub shooter_remove_time: Option<chrono::NaiveDateTime>,
    pub shooter_modify_time: chrono::NaiveDateTime,
}
...
#[derive(Identifiable, Associations, Queryable, Debug, PartialEq)]
#[belongs_to(Shooter, foreign_key = "shooter_id")]
#[table_name = "oauth"]
#[primary_key(oauth_id)]
pub struct Oauth {
    pub oauth_id: i32,
    pub oauth_vendor: String,
    pub oauth_user: String,
    pub shooter_id: i32,
    pub oauth_status: String,
    pub oauth_create_time: chrono::NaiveDateTime,
    pub oauth_last_use_time: chrono::NaiveDateTime,
    pub oauth_modify_time: chrono::NaiveDateTime,
}

I’ll get it to work eventually – I really hope it isn’t failing just because I didn’t name my primary fields plain id like the examples in Diesel’s guides do. It seems like naming shooter_id in the oauth table to match shooter_id in the shooter table should make things obvious. Hopefully we aren’t forced to always use id as the primary field… no, that can’t be it.

Anyway, back to verifying. The other main case is that an oauth record with this token is NOT found in the table. Which means it is a new connection we haven’t seen before. If the session is already logged in, we just need to attach this oauth token to the logged in user and return true!

Some(id) => {
    create_oauth(&connection, &vendor, token, id);
    true
}

Otherwise, two choices – we will try to match on an existing shooter via the email address. If we find a match, we log them in and again attach this oauth token to their shooter record.

 None => match shooter
     .filter(shooter_email.eq(&email))
     .first::<Shooter>(&connection)
 {
     // email address WAS found in shooter table
     Ok(s) => {
         create_oauth(&connection, &vendor, token, s.shooter_id);
         true
     }

Otherwise, we don’t get a hit; that is, we haven’t seen this oauth token before AND we haven’t seen this validated email address before. We have to call that a brand new shooter account. I mentioned we create accounts from the OAuth requests using Diesel – this is where that happens. In this case, we create both the shooter record and the oauth record, linking them together.

// email address not found in shooter table
Err(diesel::NotFound) => {
    let this_shooter =
        create_shooter(&connection, name, None, email,
            &"active".to_string());
    session.shooter_id = Some(this_shooter.shooter_id);
    create_oauth(&connection, &vendor, token, this_shooter.shooter_id);
    true
}

Using Diesel to Insert Records

As we fall back out of the stack of functions we’ve called, because we returned true here, the session gets updated with the shooter_id – they are now logged in. Also, the shooter and oauth records are saved, so if they come back, they can just validate and be logged into their same account again. Here are the two methods that create those records:

<src/shooter.rs>

...
pub fn create_shooter<'a>(
    connection: &PgConnection,
    name: &'a String,
    password: Option<&'a String>,
    email: &'a String,
    status: &'a String,
) -> Shooter {
    use crate::schema::shooter::dsl::*;

    let new_shooter = NewShooter {
        shooter_name: name.to_string(),
        shooter_password: match password {
            Some(p) => p.to_string(),
            None => thread_rng()
                .sample_iter(&Alphanumeric)
                .take(64)
                .collect::<String>(),
        },
        shooter_status: status.to_string(),
        shooter_email: email.to_string(),
        shooter_real_name: name.to_string(),
    };

    diesel::insert_into(shooter)
        .values(&new_shooter)
        .get_result(connection)
        .expect("Error saving new Shooter")
}
<src/oauth.rs>

...
pub fn create_oauth<'a>(
    connection: &PgConnection,
    vendor: &'a String,
    user_id: &'a String,
    shooterid: i32,
) -> Oauth {
    use crate::schema::oauth::dsl::*;

    let new_oauth = NewOauth {
        oauth_vendor: vendor.to_string(),
        oauth_user: user_id.to_string(),
        shooter_id: shooterid,
    };

    diesel::insert_into(oauth)
        .values(&new_oauth)
        .get_result(connection)
        .expect("Error saving new Oauth")
}

As far as writing these new records to PgSQL: in both cases, we have NewShooter and NewOauth structs that let us set the bare minimum of fields without worrying about the fields PgSQL will default for us (like the create_time fields). We set up the appropriate struct and pass it to insert_into(). Adding .get_result() returns the newly created record to us, so we have access to the brand-new shooter_id or oauth_id.

Complexity

If a user comes to the site, signs in with one OAuth (which creates their shooter record and attaches that oauth token) and then signs in with the other, this logic figures out they are validated to be the same person, so creates just a single shooter record with two oauth records, and both point to the one user. If they come back, they can authenticate via either third-party and are allowed back in.
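That linking behavior can be modeled as a toy in-memory structure. Everything here is hypothetical (`Accounts`, `sign_in`, and the maps stand in for the oauth and shooter tables that the real code manages through Diesel), but it captures the rules: a known token logs you in, an unknown token attaches to a shooter matched by email, and a brand-new email creates a new shooter.

```rust
use std::collections::HashMap;

// Toy model of the linking rules: (vendor, token) -> shooter_id and
// email -> shooter_id. The real app stores these as oauth and shooter
// rows in Postgres; none of these names exist in the actual codebase.
struct Accounts {
    oauth: HashMap<(String, String), u32>,
    by_email: HashMap<String, u32>,
    next_id: u32,
}

impl Accounts {
    fn new() -> Self {
        Accounts { oauth: HashMap::new(), by_email: HashMap::new(), next_id: 1 }
    }

    fn sign_in(&mut self, vendor: &str, token: &str, email: &str) -> u32 {
        let key = (vendor.to_string(), token.to_string());
        if let Some(&id) = self.oauth.get(&key) {
            return id; // token already known: same shooter logs back in
        }
        // Unknown token: reuse the shooter matched by email, or create one.
        let id = match self.by_email.get(email) {
            Some(&id) => id,
            None => {
                let id = self.next_id;
                self.next_id += 1;
                self.by_email.insert(email.to_string(), id);
                id
            }
        };
        self.oauth.insert(key, id); // attach this oauth to that shooter
        id
    }
}

fn main() {
    let mut a = Accounts::new();
    let id1 = a.sign_in("google", "gtok", "me@example.com");
    let id2 = a.sign_in("facebook", "fbtok", "me@example.com");
    assert_eq!(id1, id2); // two oauth records, one shooter
    println!("linked shooter id: {}", id1);
}
```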

Ok, more to come as I figure out other problems. I haven’t gone through that logic tightly enough to make sure I don’t have any holes – and it wouldn’t surprise me to find some. It doesn’t really matter – this is certainly teaching me Rust! Give it a try at PinpointShooting.com – but don’t be surprised if your shooter account gets deleted, constantly.

Rust Web App Session Management with AWS

Even more libraries added to my web app as I introduce Amazon’s DynamoDB to handle my session storage


Printed security pass - an old school session management practice
Didn’t someone mention a cookie too!? Where’s my cookie!?

I’ve worked on some more bits with my pretend web application I introduced a few posts ago. Thinking about the technology needed to let users register and later log in, session management seems like a good first step to take. I would like to store the sessions in Amazon’s DynamoDB too, just for fun and experience. I’ve pushed several commits to the repository lately, so let me go through some of the changes enabling Rust web app session management with AWS DynamoDB.

First, I searched and read up a bit on best practices for session management. I found a Session Management Cheat Sheet from OWASP. Also, more generically, 12 Best Practices for User Account, Authorization, and Password Management on the Google Cloud Blog. I haven’t addressed many of these yet, but I’ll come back to resources like this several times in the future. I was trying to find language-independent whitepapers or guides on web app session management… and something not a decade old. If anyone has a definitive resource for this, please share!

Rusoto to the Rescue

First, let’s get our web application interfacing with AWS. Rusoto is a big set of crates for using AWS services. Rusoto Core (of course) and DynamoDB are all I need for now, but I am sure I’ll come back for more! To get set up, I created a session table in my AWS console and created a programmatic IAM role called pinpointshooting, which has full access to the table. When you create a programmatic role, you are given credential access keys, which I put into my .env file at the root of the pinpointshooting project. Connecting to my AWS account with those access keys is then a simple matter of:

DynamoDbClient::new(Region::UsEast1)

I had to think about the process I would use to store the session id in a cookie, and when/how it gets created, verified, updated, and deleted. I came up with a simple sketch to make it more clear in my mind. Note that I mention a cache both in my sketch and in some of the comments, but I haven’t done anything about that yet.

  • When a user arrives with no cookie – we need to create a session with some appropriate defaults, write it to the db, and send the cookie in the resulting response.
  • If the user arrives with a cookie – we need to pull the session from the DB (if it really exists) and do some verification and expiration checking. If everything looks good, we pull the session details into our struct. Otherwise, we delete that invalid session data and create a new session like above.
  • When the user selects to log out – we again check to make sure the session is valid, but then delete the session, delete the cookie, and log the user out.
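The first two cases above can be modeled as a small, storage-free decision function. These types are hypothetical (the real code talks to DynamoDB, and the logout case is handled separately), but the branching is the same:

```rust
// Toy model of the session flow: given whether a cookie arrived and what
// (if anything) the store returned for it, decide what to do. Logout is
// omitted here; it is its own explicit action in the real app.
#[derive(Debug, PartialEq)]
enum Action {
    CreateNew,       // no cookie, or cookie points at nothing
    Reuse,           // cookie + valid, unexpired session
    DeleteAndCreate, // cookie + session that failed verification
}

// Stand-in for whatever the session store returns.
struct StoredSession {
    expired: bool,
    user_agent_matches: bool,
}

fn decide(cookie: Option<&str>, lookup: Option<StoredSession>) -> Action {
    match (cookie, lookup) {
        (None, _) => Action::CreateNew,
        (Some(_), None) => Action::CreateNew,
        (Some(_), Some(s)) if !s.expired && s.user_agent_matches => Action::Reuse,
        (Some(_), Some(_)) => Action::DeleteAndCreate,
    }
}

fn main() {
    let good = StoredSession { expired: false, user_agent_matches: true };
    assert_eq!(decide(Some("sessid"), Some(good)), Action::Reuse);
    println!("decision table checks out");
}
```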



Here are those steps above, as I have them in code at this point!

Step 1: Search for or Create the Session

// Check for sessid cookie and verify session or create new
// session to use - either way, return the session struct
pub fn get_or_setup_session(cookies: &mut Cookies) -> Session {
    let applogger = &LOGGING.logger;
    let dynamodb = connect_dynamodb();

    // if we can pull sessid from a cookie and validate it,
    // pull session from cache or from storage and return
    if let Some(cookie) = cookies.get_private("sessid") {
        debug!(applogger, "Cookie found, verifying";
            "sessid" => cookie.value());

        // verify from dynamodb, update session with last-access if good
        if let Some(mut session) = verify_session_in_ddb(&dynamodb, &cookie.value().to_string()) {
            save_session_to_ddb(&dynamodb, &mut session);
            return session;
        }
    }

    // otherwise, start a new, empty session to use for this user
    let mut hasher = Sha256::new();
    let randstr: String = thread_rng()
        .sample_iter(&Alphanumeric)
        .take(256)
        .collect();
    hasher.input(randstr);
    let sessid = format!("{:x}", hasher.result());

    cookies.add_private(Cookie::new("sessid", sessid.clone()));

    let mut session = Session {
        sessid: sessid.clone(),
        ..Default::default()
    };

    save_session_to_ddb(&dynamodb, &mut session);
    session
}

This will check for the (private) sessid cookie, if it came along with the HTTP request, try to fetch the session from DynamoDB, and then verify it (see below). If that succeeds, we update the session (with a fresh last-access timestamp) and return the session struct. Otherwise, we create a new session id, with some appropriate defaults for now, by hashing a random string of characters. Hrm, I should make sure I didn’t just happen to collide with an existing sessid at this point! Anyway, we then add (or update) the sessid cookie to be returned with the HTTP response. Since we are using Rocket’s private cookie feature, the value is actually encrypted – the user can’t see what their real session id is or try to manufacture one. Lastly, if we just created a new session, we save it to the session table and return it.
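That collision worry could be handled with a regenerate-on-collision loop. Here is a sketch against an in-memory set; in the real app the existence check would be a DynamoDB lookup, and `fresh_session_id` is a name I made up for illustration:

```rust
use std::collections::HashSet;

// Keep generating candidate ids until one is unused. `existing` stands in
// for a "does this sessid exist in the store?" check; `next_candidate` is
// any id generator (e.g. the SHA-256-of-random-string approach).
fn fresh_session_id(
    existing: &HashSet<String>,
    mut next_candidate: impl FnMut() -> String,
) -> String {
    loop {
        let id = next_candidate();
        if !existing.contains(&id) {
            return id;
        }
    }
}

fn main() {
    let mut taken = HashSet::new();
    taken.insert("aaa".to_string());
    // Deterministic stand-in generator: yields "aaa" (collides), then "bbb".
    let mut ids = vec!["bbb".to_string(), "aaa".to_string()];
    let id = fresh_session_id(&taken, move || ids.pop().unwrap());
    assert_eq!(id, "bbb");
    println!("fresh id: {}", id);
}
```

With a 256-bit hash the loop will almost never iterate more than once, so the extra lookup is cheap insurance rather than a hot path.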

Step 2: Verify a Session

// Search for sessid in dynamodb and verify session if found
// including to see if it has expired
fn verify_session_in_ddb(dynamodb: &DynamoDbClient, sessid: &String) -> Option<Session> {
    let applogger = &LOGGING.logger;

    let av = AttributeValue {
        s: Some(sessid.clone()),
        ..Default::default()
    };

    let mut key = HashMap::new();
    key.insert("sessid".to_string(), av);

    let get_item_input = GetItemInput {
        table_name: "session".to_string(),
        key: key,
        ..Default::default()
    };

    match dynamodb.get_item(get_item_input).sync() {
        Ok(item_output) => match item_output.item {
            Some(item) => match item.get("session") {
                Some(session) => match &session.s {
                    Some(string) => {
                        let session: Session = serde_json::from_str(&string).unwrap();
                        match session.last_access {
                            Some(last) => {
                                if last > Utc::now() - Duration::minutes(CONFIG.sessions.expire) {
                                    Some(session)
                                } else {
                                    debug!(applogger, "Session expired"; "sessid" => sessid);
                                    delete_session_in_ddb(dynamodb, sessid);
                                    None
                                }
                            }
                            None => {
                                debug!(applogger, "'last_access' is blank for stored session"; "sessid" => sessid);
                                delete_session_in_ddb(dynamodb, sessid);
                                None
                            }
                        }
                    }
                    None => {
                        debug!(applogger, "'session' attribute is empty for stored session"; "sessid" => sessid);
                        delete_session_in_ddb(dynamodb, sessid);
                        None
                    }
                },
                None => {
                    debug!(applogger, "No 'session' attribute found for stored session"; "sessid" => sessid);
                    delete_session_in_ddb(dynamodb, sessid);
                    None
                }
            },
            None => {
                debug!(applogger, "Session not found in dynamodb"; "sessid" => sessid);
                None
            }
        },
        Err(e) => {
            crit!(applogger, "Error in dynamodb"; "err" => e.to_string());
            panic!("Error in dynamodb: {}", e.to_string());
        }
    }
}

That’s some deep nesting – I’m still a newbie at Rust coding, so there is probably a better way to write this. If I didn’t care about logging the problems, I could probably shorten it by using the “?” operator at the end of each step. Maybe I’ll change that eventually, but for now I need the debugging logs to see that things are working.
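The “?” idea can be shown on a toy version of the unwrapping chain. The types here are stand-ins for Rusoto’s output (and logging is dropped, which is exactly the trade-off mentioned above), but each `?` replaces one level of match nesting:

```rust
use std::collections::HashMap;

// Stand-in for rusoto's AttributeValue: we only care about the string slot.
struct Attribute {
    s: Option<String>,
}

// Toy of the get_item unwrapping chain: item -> "session" attribute ->
// string payload. Each Option layer that verify_session_in_ddb matches on
// collapses to a single `?` in a function returning Option.
fn extract_session(item: Option<HashMap<String, Attribute>>) -> Option<String> {
    let item = item?;                // the Some(item) arm
    let attr = item.get("session")?; // 'session' attribute present
    let payload = attr.s.as_ref()?;  // attribute actually holds a string
    Some(payload.clone())            // real code would serde_json::from_str here
}

fn main() {
    let mut item = HashMap::new();
    item.insert("session".to_string(), Attribute { s: Some("{}".to_string()) });
    assert_eq!(extract_session(Some(item)), Some("{}".to_string()));
    println!("flattened chain works");
}
```

The cost is that every `None` looks the same, so there is nothing to log about *which* step failed – which is why the nested version is still in the app for now.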

This chain tries to retrieve the session attribute associated with the sessid (the session id encoded in the cookie) and, for now, just makes sure it isn’t too old. If that simple check passes, we deserialize and return Some(session). In all other cases, we return None and a new session will be generated. For cases where a session was present but determined to be invalid or expired, we also delete it from DynamoDB.

Step 3: Save (or Update) the Session

// Write current session to dynamodb, update last-access date/time too
fn save_session_to_ddb(dynamodb: &DynamoDbClient, session: &mut Session) {
    let applogger = &LOGGING.logger;

    session.last_access = Some(Utc::now());

    let sessid_av = AttributeValue {
        s: Some(session.sessid.clone()),
        ..Default::default()
    };
    let session_av = AttributeValue {
        s: Some(serde_json::to_string(&session).unwrap()),
        ..Default::default()
    };
    let mut item = HashMap::new();
    item.insert("sessid".to_string(), sessid_av);
    item.insert("session".to_string(), session_av);

    let put_item_input = PutItemInput {
        table_name: "session".to_string(),
        item: item,
        ..Default::default()
    };

    match dynamodb.put_item(put_item_input).sync() {
        Ok(_) => {}
        Err(e) => {
            crit!(applogger, "Error in dynamodb"; "err" => e.to_string());
            panic!("Error in dynamodb: {}", e.to_string());
        }
    };
}

Here, we simply take the session data we are passed, update the last_access to now() and then write the serialized session struct to DynamoDB. Also note, I’m storing the session id inside the session data itself. I’m not sure yet if this is handy duplication, a security concern, or irrelevant.

Step 4: Clean-up After Ourselves

Repairman cleaning up his work, but did anyone check his security badge!?
Hey! Are you a security concern??

Just for completeness, here is the function to drop a session from DynamoDB once we have determined it is invalid (expired).

// Delete session from dynamodb
fn delete_session_in_ddb(dynamodb: &DynamoDbClient, sessid: &String) {
    let applogger = &LOGGING.logger;

    let av = AttributeValue {
        s: Some(sessid.clone()),
        ..Default::default()
    };

    let mut key = HashMap::new();
    key.insert("sessid".to_string(), av);

    let delete_item_input = DeleteItemInput {
        table_name: "session".to_string(),
        key: key,
        ..Default::default()
    };

    match dynamodb.delete_item(delete_item_input).sync() {
        Ok(_) => {
            debug!(applogger, "Deleted invalid session from ddb"; "sessid" => sessid);
        }
        Err(e) => {
            crit!(applogger, "Error in dynamodb"; "err" => e.to_string());
            panic!("Error in dynamodb: {}", e.to_string());
        }
    };
}

Lots, lots, lots more to do – but I’m having fun with this. It might go faster if I actually had a plan for what this web app actually does. I mean, I have an idea, but I haven’t mocked up a single page so I’m going slow and just enjoying hacking out Rust code as I go! Thanks for coming along with me!