Your Project

Driving development with cargo loco

Create your starter app:

 loco new
 ❯ App name? · myapp
 ❯ What would you like to build? · Saas App with client side rendering
 ❯ Select a DB Provider · Sqlite
 ❯ Select your background worker type · Async (in-process tokio async tasks)

🚂 Loco app generated successfully in:
myapp/

- assets: You've selected `clientside` for your asset serving configuration.

Next step, build your frontend:
  $ cd frontend/
  $ npm install && npm run build

Now cd into your app and try out the various commands:

cargo loco --help
cd ./examples/demo && cargo loco --help

You can now drive your development through the CLI:

$ cargo loco generate model posts
$ cargo loco generate controller posts
$ cargo loco db migrate
$ cargo loco start

And running tests or building with Cargo works just as you already know:

$ cargo build
$ cargo test

Starting your app

To run your app, run:

cargo loco start

Background workers

Based on your configuration (in config/), your workers will know how to operate:

workers:
  # requires Redis
  mode: BackgroundQueue

  # can also use:
  # ForegroundBlocking - great for testing
  # BackgroundAsync - for same-process jobs, using tokio async

And now, you can run the actual process in various ways (a sketch of how jobs get enqueued follows the list):

  • cargo loco start --worker - run only a worker and process background jobs. This is great for scale: run one service app with cargo loco start, and then run many process-based workers with cargo loco start --worker, distributed on any machine you want.
  • cargo loco start --server-and-worker - run both the service and a background worker processor in the same Unix process. It uses Tokio for executing background jobs. This is great when you want to run on a single server without too much expense, or when you have constrained resources.
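
Regardless of which mode you pick, jobs are enqueued the same way from your application code. Below is a rough sketch of enqueueing a job from a request handler; DownloadWorker and DownloadWorkerArgs are hypothetical names for a worker you would have generated yourself (e.g. with cargo loco generate worker download), and exact trait details may vary between Loco versions:

// a sketch: enqueue a background job from a request handler.
// `DownloadWorker` / `DownloadWorkerArgs` are hypothetical names for a
// generated worker (e.g. `cargo loco generate worker download`).
use loco_rs::prelude::*;

pub async fn trigger_download(State(ctx): State<AppContext>) -> Result<Response> {
    // the configured `workers.mode` decides whether this job runs in-process
    // (ForegroundBlocking / BackgroundAsync) or is pushed to Redis (BackgroundQueue)
    DownloadWorker::perform_later(
        &ctx,
        DownloadWorkerArgs {
            user_guid: "abc123".to_string(),
        },
    )
    .await?;
    format::empty()
}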

Getting your app version

Because your app is compiled and then copied to production, Loco gives you two important pieces of operability information:

  • Which version is this app, and which GIT SHA was it built from? cargo loco version
  • Which Loco version was this app compiled against? cargo loco --version

Both version strings are parsable and stable, so you can use them in integration scripts, monitoring tools, and so on.

You can shape your own custom app versioning scheme by overriding the app_version hook in your src/app.rs file.
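
As a sketch, such an override might look like the following inside your impl Hooks for App block (the BUILD_SHA and GITHUB_SHA build-time variables here are assumptions; use whatever your build pipeline actually provides):

// in src/app.rs, inside `impl Hooks for App` -- a sketch of a custom
// versioning scheme combining the crate version with a build-time SHA
fn app_version() -> String {
    format!(
        "{} ({})",
        env!("CARGO_PKG_VERSION"),
        // BUILD_SHA / GITHUB_SHA are assumed to be injected by your CI
        option_env!("BUILD_SHA")
            .or(option_env!("GITHUB_SHA"))
            .unwrap_or("dev")
    )
}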

Using the scaffold generator

Scaffolding is an efficient and speedy method for generating key components of an application. By utilizing scaffolding, you can create models, views, and controllers for a new resource all in one go.

See scaffold command:

cd ./examples/demo && cargo loco generate scaffold --help

You can begin by generating a scaffold for the Post resource, which will represent a single blog posting. To accomplish this, open your terminal and enter the following command:

cargo loco generate scaffold posts name:string title:string content:text --api

The scaffold generate command supports API, HTML, or HTMX templates by adding the --template flag to the scaffold command.

Scaffold file layout

The scaffold generator will build several files in your application:

File | Purpose
migration/src/lib.rs | Include Post migration.
migration/src/m20240606_102031_posts.rs | Posts migration.
src/app.rs | Adding Posts to the application router.
src/controllers/mod.rs | Include the Posts controller.
src/controllers/posts.rs | The Posts controller.
tests/requests/posts.rs | Functional testing.
src/models/mod.rs | Include the Posts model.
src/models/posts.rs | Posts model.
src/models/_entities/mod.rs | Include the Posts Sea-orm entity model.
src/models/_entities/posts.rs | Sea-orm entity model.
src/views/mod.rs | Include Posts views. Only for HTML and HTMX templates.
src/views/posts.rs | Posts template generator. Only for HTML and HTMX templates.
assets/views/posts/create.html | Create post template. Only for HTML and HTMX templates.
assets/views/posts/edit.html | Edit post template. Only for HTML and HTMX templates.
assets/views/posts/list.html | List post template. Only for HTML and HTMX templates.
assets/views/posts/show.html | Show post template. Only for HTML and HTMX templates.
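
For instance, the src/app.rs change listed above typically amounts to registering the new controller's routes with the application router. A minimal sketch, assuming the standard starter layout (exact hook signatures can vary between Loco versions):

// in src/app.rs, inside `impl Hooks for App` -- a sketch of wiring the
// scaffolded Posts controller into the application router
fn routes(_ctx: &AppContext) -> AppRoutes {
    AppRoutes::with_default_routes()
        .add_route(controllers::posts::routes())
}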

Your app configuration

Configuration in Loco lives in config/ and by default sets up 3 different environments:

config/
  development.yaml
  production.yaml
  test.yaml

An environment is picked up automatically based on:

  • A command line flag: cargo loco start --environment production; if not given, it falls back to
  • The LOCO_ENV, RAILS_ENV, or NODE_ENV environment variable

When nothing is given, the default value is development.

The Loco framework supports custom environments in addition to the defaults. To add a custom environment, create a configuration file whose name matches the environment identifier, as in the example below.

Placeholders / variables in config

It is possible to inject values into a configuration file. In this example, we get a port value from the NODE_PORT environment variable:

# config/development.yaml
# every configuration file is a valid Tera template
server:
  # Port on which the server will listen. the server binding is 0.0.0.0:{PORT}
  port:  {{ get_env(name="NODE_PORT", default=5150) }}
  # The UI hostname or IP address that mailers will point to.
  host: http://localhost
  # Out of the box middleware configuration. To disable a middleware, change its `enable` field to `false` or comment out the middleware block

The get_env function is part of the Tera template engine. Refer to the Tera docs to see what more you can use.

Example

Suppose you want to add a 'qa' environment. Create a qa.yaml file in the config folder:

config/
  development.yaml
  production.yaml
  test.yaml
  qa.yaml

To run the application using the 'qa' environment, execute the following command:

LOCO_ENV=qa cargo loco start

Settings

The configuration files contain knobs to set up your Loco app. You can also add your own custom settings under the settings: section. In config/development.yaml, add a settings: section:

settings:
  allow_list:
    - google.com
    - apple.com

These settings will appear in ctx.config.settings as a serde_json::Value. You can create strongly typed settings by adding a struct:

// put this in src/common/settings.rs
use loco_rs::prelude::*;
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Default, Debug)]
pub struct Settings {
    pub allow_list: Option<Vec<String>>,
}

impl Settings {
    /// Deserialize the custom `settings:` block from the app config.
    pub fn from_json(value: &serde_json::Value) -> Result<Self> {
        Ok(serde_json::from_value(value.clone())?)
    }
}

Then, you can access settings from anywhere like this:

// in controllers, workers, tasks, or elsewhere,
// as long as you have access to AppContext (here: `ctx`)

if let Some(settings) = &ctx.config.settings {
    let settings = common::settings::Settings::from_json(settings)?;
    println!("allow list: {:?}", settings.allow_list);
}

Server

Here is a detailed description of the interface (listening, etc.) parameters under the server: section:

  • port: as the name says, for changing ports, mostly when behind a load balancer, etc.

  • binding: for changing which IP interface the server "binds" to. Mostly, when you are behind a load balancer like nginx, you bind to a local address (when the LB is on the same machine). However, you can also bind to the "world" (0.0.0.0). You can set the binding: field via config, or via the CLI (using the -b flag) -- which is what Rails does.

  • host: - for "visibility" or out-of-band use cases. For example, sometimes you want to display the current server host (in terms of domain name, etc.) purely for visibility. And sometimes, as in the case of emails, your server address is "out of band": when someone opens your email in their Gmail account, the links they click must point to your external, visible address (the official domain name, etc.), not to an internal "host" address, which would be the wrong thing to do (imagine an email link pointing to "http://127.0.0.1/account/verify").

Logger

Other than the commented fields in the logger: section of your YAML file, here's some more context:

  • logger.pretty_backtrace - displays a colorful backtrace without noise for a great development experience. Note that this forcefully sets RUST_BACKTRACE=1 in the process' env, which enables a (costly) backtrace capture on specific errors. Enable it in development, disable it in production. When needed in production, set RUST_BACKTRACE=1 ad-hoc on the command line to show it.

For all available configuration options, refer to the Loco configuration documentation.