Models
Models in loco are entity classes that provide easy database querying and writes, as well as migrations and seeding.
Sqlite vs Postgres
You might have selected sqlite (the default) when you created your new app. Loco allows you to move seamlessly between sqlite and postgres.

It is typical to use sqlite for development and postgres for production. Some people prefer postgres for both development and production because they use pg-specific features, and these days some people use sqlite for production too. Either way -- all are valid choices.

To configure postgres instead of sqlite, go into your config/development.yaml (or production.yaml) and set the following, assuming your app is named myapp:
database:
  uri: "{{ get_env(name="DATABASE_URL", default="postgres://loco:loco@localhost:5432/myapp_development") }}"

This assumes a local Postgres server with a user and password of loco:loco and a database named myapp_development. For test and production, your DB should be named myapp_test and myapp_production respectively.
For your convenience, here is a Docker command to start up a Postgres database server (the credentials match the default URI above):

$ docker run -d -p 5432:5432 -e POSTGRES_USER=loco -e POSTGRES_DB=myapp_development -e POSTGRES_PASSWORD="loco" postgres:15.3-alpine
Finally, you can also use the doctor command to validate your connection:

$ cargo loco doctor
Fat models, slim controllers
loco models are designed after Active Record. This means they're a central point in your universe, and every piece of logic or operation your app has should live there. It means that User::create creates a user, but also that user.buy(product) buys a product.
If you agree with that direction you'll get these for free:
- Time-effective testing, because testing your model tests most if not all of your logic and moving parts.
- Ability to run complete app workflows from tasks, or from workers and other places.
- Effectively compose features and use cases by combining models, and nothing else.
- Essentially, models become your app and controllers are just one way to expose your app to the world.
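The "fat model" idea can be sketched in plain Rust. The User and Product types below are hypothetical stand-ins, not loco's API; the point is that a domain operation like buy lives on the model, so controllers, tasks, and workers all reuse the same logic:

```rust
// Hypothetical domain types -- in a real loco app these would be your
// generated entities extended with your own methods.
struct Product {
    price_cents: i64,
}

struct User {
    balance_cents: i64,
}

impl User {
    // Business logic lives on the model, not in a controller.
    fn buy(&mut self, product: &Product) -> Result<(), String> {
        if self.balance_cents < product.price_cents {
            return Err("insufficient balance".to_string());
        }
        self.balance_cents -= product.price_cents;
        Ok(())
    }
}

fn main() {
    let product = Product { price_cents: 500 };
    let mut user = User { balance_cents: 600 };
    assert!(user.buy(&product).is_ok()); // succeeds, 100 cents left
    assert!(user.buy(&product).is_err()); // fails: can't afford a second one
    println!("fat-model sketch ok");
}
```

Because the operation is on the model, a test of `buy` covers the same code path your controllers and workers use.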
We use SeaORM as the main ORM behind our ActiveRecord abstraction.

- Why not Diesel? Although Diesel has better performance, its macros and general approach felt incompatible with what we were trying to do.
- Why not sqlx? SeaORM uses sqlx under the hood, so the plumbing is there for you to use raw sqlx if you wish.
Example model
The life of a loco model starts with a migration; entity Rust code is then generated for you automatically from the database structure:

src/
  models/
    _entities/   <--- autogenerated code
      users.rs   <--- the bare entity and helper traits
    users.rs     <--- your custom activerecord code
Using the users activerecord is just like using it under SeaORM (see the SeaORM docs for examples). Adding functionality to the users activerecord is done by extension:
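For instance, extending the generated entity might look like the self-contained sketch below. The inline _entities module is a hypothetical stand-in for the autogenerated code, included only so the example compiles on its own:

```rust
// Hypothetical stand-in for the generated entity (normally found in
// src/models/_entities/users.rs).
mod _entities {
    pub mod users {
        #[derive(Debug, Clone)]
        pub struct Model {
            pub first_name: String,
            pub last_name: String,
        }
    }
}

// In src/models/users.rs, extend the generated entity with your own logic.
impl _entities::users::Model {
    pub fn full_name(&self) -> String {
        format!("{} {}", self.first_name, self.last_name)
    }
}

fn main() {
    let user = _entities::users::Model {
        first_name: "Ada".to_string(),
        last_name: "Lovelace".to_string(),
    };
    println!("{}", user.full_name()); // prints "Ada Lovelace"
}
```

Because the generated code lives in _entities/ and your extensions live one level up, regenerating entities never clobbers your custom logic.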
Crafting models
Migrations
To add a new model you have to use a migration.
$ cargo loco generate model posts title:string! content:text user:references
When a model is added via migration, the following default fields are provided:

- created_at (ts!): a timestamp indicating when your model was created.
- updated_at (ts!): a timestamp indicating when your model was updated.

These fields are ignored if you provide them in your migration command. In addition, create_at and update_at fields are also ignored if provided.
For schema data types, the generator maps field types onto database column types. A bare type (e.g. string) creates a nullable column, a ! suffix (string!) makes it non-null, and a ^ suffix (string^) makes it unique. Commonly used types include uuid, string, text, int, big_int, float, decimal, bool, ts (timestamp), tstz (timestamp with time zone), date, time, json, jsonb, and references; see the full mapping table in the Loco documentation.
Using user:references uses the special references type, which will create a relationship between a post and a user, adding a user_id reference field to the posts table.
You can generate an empty model:
$ cargo loco generate model posts
You can generate only the model's migration, which means it will not run automatically:
$ cargo loco generate model --migration-only posts
Or a data model, without any references:
$ cargo loco generate model posts title:string! content:text
This creates a migration in the root of your project in migration/.
You can now apply it:
$ cargo loco db migrate
And generate back entities (Rust code) from it:
$ cargo loco db entities
Loco is a migration-first framework, similar to Rails. This means that when you want to add models, data fields, or other model-oriented changes, you start with a migration that describes them, and then apply the migration to get back generated entities in model/_entities.

This enforces everything-as-code, reproducibility, and atomicity, so that no knowledge of the schema goes missing.
Down Migrations
If you realize that you made a mistake, you can always undo the migration. This will undo the changes made by the migration (assuming that you added the appropriate code for down in the migration):

$ cargo loco db down

The down command on its own rolls back only the last migration. If you want to roll back multiple migrations, specify the number of migrations to roll back:

$ cargo loco db down 3
Verbs, singular and plural
- references: use the singular form of the table name with a :references type. user:references (references Users), vote:references (references Votes).
- column names: anything you like; prefer snake_case.
- table names: plural, snake case. users, draft_posts.
- migration names: anything that can be a file name; prefer snake case. create_table_users, add_vote_id_to_movies.
- model names: generated automatically for you. Usually the generated name is pascal case, plural. Users, UsersVotes.

Here is an example showcasing the naming conventions:

$ cargo loco generate model movies long_title:string user:references

- model name in plural: movies
- reference user in singular: user:references
- column name in snake case: long_title:string
Naming migrations
There are no rules for how to name migrations, but here are a few guidelines to keep your migration stack readable as a list of files:

- <table> - create a table, plural: movies
- add_<table>_<field> - add a column: add_users_email
- index_<table>_<field> - add an index: index_users_email
- alter_<table> - change a schema: alter_users
- delete_<table>_<field> - remove a column: delete_users_email
- data_fix_<issue> - fix some data, using entity queries or raw SQL: data_fix_users_timezone_issue_315
Add or remove a column
Adding a column (a sketch using SeaORM's SchemaManager inside a migration; adapt the table and column identifiers to your own schema):

```rust
manager
    .alter_table(
        Table::alter()
            .table(Movies::Table)
            .add_column(ColumnDef::new(Movies::Rating).integer())
            .to_owned(),
    )
    .await
```
Dropping a column:

```rust
manager
    .alter_table(
        Table::alter()
            .table(Movies::Table)
            .drop_column(Movies::Rating)
            .to_owned(),
    )
    .await
```
Add index
You can copy some of this code for adding an index:

```rust
manager
    .create_index(
        Index::create()
            .name("idx-movies-rating")
            .table(Movies::Table)
            .col(Movies::Rating)
            .to_owned(),
    )
    .await;
```
Create a data fix
Creating a data fix in a migration is easy - just use SQL statements as you like (a sketch; the SQL here is an example):

```rust
async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
    let db = manager.get_connection();
    // run any data-fixing SQL you need
    db.execute_unprepared("UPDATE users SET email = LOWER(email)").await?;
    Ok(())
}
```
Having said that, it's up to you to choose where to code your data fixes:

- a task - where you can use high-level models
- a migration - where you can both change structure and fix the data stemming from it with raw SQL
- an ad-hoc playground - where you can use high-level models or experiment with things
Validation
We use the validator library under the hood. First, build your validator with the constraints you need, and then implement Validatable for your ActiveModel.
Note that Validatable is how you instruct Loco which Validator to provide and how to build it from a model.
Now you can use user.validate() seamlessly in your code: when it returns Ok the model is valid; otherwise you'll find validation errors in Err(...), available for inspection.
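The shape of this flow can be sketched in plain Rust. This is NOT the real validator or Validatable API -- ActiveModel and Validator below are hypothetical stand-ins that only mirror the pattern: a validator is built from the model's fields, and validate() returns Ok or a list of errors:

```rust
// Hypothetical stand-in for a loco ActiveModel.
struct ActiveModel {
    email: String,
    name: String,
}

// Hypothetical stand-in for a `validator`-derived struct.
struct Validator {
    email: String,
    name: String,
}

impl Validator {
    fn validate(&self) -> Result<(), Vec<String>> {
        let mut errors = Vec::new();
        if !self.email.contains('@') {
            errors.push("email is invalid".to_string());
        }
        if self.name.len() < 2 {
            errors.push("name must be at least 2 characters".to_string());
        }
        if errors.is_empty() { Ok(()) } else { Err(errors) }
    }
}

impl ActiveModel {
    // In loco, implementing `Validatable` is what wires this up for you.
    fn validate(&self) -> Result<(), Vec<String>> {
        Validator { email: self.email.clone(), name: self.name.clone() }.validate()
    }
}

fn main() {
    let ok = ActiveModel { email: "ada@example.com".to_string(), name: "Ada".to_string() };
    assert!(ok.validate().is_ok());

    let bad = ActiveModel { email: "nope".to_string(), name: "A".to_string() };
    assert_eq!(bad.validate().unwrap_err().len(), 2);
    println!("validation sketch ok");
}
```

In a real app the constraints come from validator's derive attributes rather than hand-written checks.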
Relationships
One to many
Here is how to associate a Company with an existing User model.
$ cargo loco generate model company name:string user:references
This will create a migration with a user_id field in Company which will reference a User.
Many to many
Here is how to create a typical "votes" table, which links a User and a Movie with a many-to-many link table. Note that it uses the special --link flag in the model generator.
Let's create a new Movie entity:
$ cargo loco generate model movies title:string
And now the link table between User (which we already have) and Movie (which we just generated) to record votes:
$ cargo loco generate model --link users_votes user:references movie:references vote:int
..
..
Writing src/models/_entities/movies.rs
Writing src/models/_entities/notes.rs
Writing src/models/_entities/users.rs
Writing src/models/_entities/mod.rs
Writing src/models/_entities/prelude.rs
... Done.
This will create a many-to-many link table named UsersVotes with a composite primary key containing both user_id and movie_id. Because it has precisely 2 IDs, SeaORM will identify it as a many-to-many link table and generate entities with the appropriate via() relationship:
```rust
// User, newly generated entity with a `via` relation at _entities/users.rs
// (a sketch of the generated code, using SeaORM's standard many-to-many pattern)
impl Related<super::movies::Entity> for Entity {
    fn to() -> RelationDef {
        super::users_votes::Relation::Movies.def()
    }
    fn via() -> Option<RelationDef> {
        Some(super::users_votes::Relation::Users.def().rev())
    }
}
```
Using via() will cause find_related to walk through the link table without you needing to know its details.
Configuration
The model configuration available to you is powerful: it controls all aspects of development, testing, and production, with a ton of goodies coming from production experience.
database:
  # Database connection URI
  uri:
  # When enabled, the SQL query will be logged
  enable_logging: false
  # Set the timeout duration when acquiring a connection
  connect_timeout: 500
  # Set the idle duration before closing a connection
  idle_timeout: 500
  # Minimum number of connections for a pool
  min_connections: 1
  # Maximum number of connections for a pool
  max_connections: 1
  # Run migrations when the application loads
  auto_migrate: true
  # Truncate the database when the application loads. This is a dangerous operation; make sure you use this flag only in dev environments or test mode
  dangerously_truncate: false
  # Recreate the schema when the application loads. This is a dangerous operation; make sure you use this flag only in dev environments or test mode
  dangerously_recreate: false
By combining these flags, you can create different experiences to help you be more productive.
You can truncate before an app starts -- which is useful for running tests, or you can recreate the entire DB when the app starts -- which is useful for integration tests or setting up a new environment. In production, you want these turned off (hence the "dangerously" part).
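For example, a test-environment configuration (a sketch; adjust the URI to your own test database) might enable truncation so every test run starts from a clean slate:

```yaml
# config/test.yaml (sketch)
database:
  uri: "{{ get_env(name="DATABASE_URL", default="postgres://loco:loco@localhost:5432/myapp_test") }}"
  enable_logging: false
  connect_timeout: 500
  idle_timeout: 500
  min_connections: 1
  max_connections: 1
  auto_migrate: true
  # safe here because this file is only used in the test environment
  dangerously_truncate: true
  dangerously_recreate: false
```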
Seeding
Loco comes equipped with a convenient seeds feature, streamlining quick and easy database reloading. This proves especially invaluable when frequently resetting the database in development and test environments. Let's explore how to get started with this feature:
Creating a new seed
1. Creating a new seed file
Navigate to src/fixtures
and create a new seed file. For instance:
src/
fixtures/
users.yaml
In this YAML file, list a set of database records for insertion. Each record must include the mandatory database fields, based on your database constraints. Optional values are at your discretion. Suppose you have a database DDL like this:
CREATE TABLE users (
id serial4 NOT NULL,
email varchar NOT NULL,
"password" varchar NOT NULL,
reset_token varchar NULL,
created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT users_email_key UNIQUE (email),
CONSTRAINT users_pkey PRIMARY KEY (id)
);
The mandatory fields include id, password, email, and created_at. The reset token can be left empty. Your seed file should resemble the following:
---
- id: 1
email: user1@example.com
password: "$2b$12$gf4o2FShIahg/GY6YkK2wOcs8w4.lu444wP6BL3FyjX0GsxnEV6ZW"
created_at: "2023-11-12T12:34:56.789"
- id: 2
pid: 22222222-2222-2222-2222-222222222222
email: user2@example.com
reset_token: "SJndjh2389hNJKnJI90U32NKJ"
password: "$2b$12$gf4o2FShIahg/GY6YkK2wOcs8w4.lu444wP6BL3FyjX0GsxnEV6ZW"
created_at: "2023-11-12T12:34:56.789"
Connect the seed
Integrate your seed into the app's Hook implementations by following these steps:

- Navigate to your app's Hook implementations.
- Add the seed within the seed function implementation. Here's an example in Rust (a sketch; the exact hook signature may vary between Loco versions):

```rust
impl Hooks for App {
    // ...
    async fn seed(db: &DatabaseConnection, base: &Path) -> Result<()> {
        db::seed::<users::ActiveModel>(db, &base.join("users.yaml").display().to_string()).await?;
        Ok(())
    }
}
```

This ensures that the seed is executed when the seed function is called. Adjust the specifics based on your application's structure and requirements.
Running seeds
The seed process is not executed automatically. You can trigger the seed process either through a task or during testing.
Using a Task
- Create a seeding task by following the instructions in the Task Documentation.
- Configure the task to execute the seed function, as demonstrated in the sketch below (exact trait signatures vary between Loco versions):

```rust
use std::collections::BTreeMap;

use async_trait::async_trait;
use loco_rs::prelude::*;

pub struct SeedData;

#[async_trait]
impl Task for SeedData {
    fn task(&self) -> TaskInfo {
        TaskInfo {
            name: "seed_data".to_string(),
            detail: "Seed the database from fixtures".to_string(),
        }
    }

    async fn run(&self, app_context: &AppContext, _vars: &BTreeMap<String, String>) -> Result<()> {
        // call your app's seeding logic here
        Ok(())
    }
}
```
Using a Test
- Enable the testing feature (testing).
- In your test section, follow the sketch below (helper names come from loco_rs::testing and may vary between versions):

```rust
#[tokio::test]
async fn can_seed_data() {
    let boot = boot_test::<App, Migrator>().await.unwrap();
    seed::<App>(&boot.app_context.db).await.unwrap();

    // assert against the seeded data here
}
```
Multi-DB
Loco enables you to work with more than one database and share instances across your application.

To set up an additional database, begin with database connections and configuration. The recommended approach is to navigate to your configuration file and add the following under settings:
settings:
  extra_db:
    uri: postgres://loco:loco@localhost:5432/loco_app
    enable_logging: false
    connect_timeout: 500
    idle_timeout: 500
    min_connections: 1
    max_connections: 1
    auto_migrate: true
    dangerously_truncate: false
    dangerously_recreate: false
After configuring the database, import loco-extras and enable the initializer-extra-db feature in your Cargo.toml:

loco-extras = { version = "*", features = ["initializer-extra-db"] }
Next, load this initializer into the initializers hook, as in this sketch (the initializer path may vary between loco-extras versions):

```rust
async fn initializers(_ctx: &AppContext) -> Result<Vec<Box<dyn Initializer>>> {
    Ok(vec![Box::new(
        loco_extras::initializers::extra_db::ExtraDbInitializer,
    )])
}
```
Now, you can use the secondary database in your controller (a sketch):

```rust
use axum::Extension;
use sea_orm::DatabaseConnection;

pub async fn list(
    Extension(secondary_db): Extension<DatabaseConnection>,
) -> Result<Response> {
    // query `secondary_db` here instead of the primary `ctx.db`
    format::empty()
}
```
Configuring
To connect more than two different databases, load the initializer-multi-db feature in loco-extras:

loco-extras = { version = "*", features = ["initializer-multi-db"] }
The database configuration should look like this:
settings:
  multi_db:
    secondary_db:
      uri: postgres://loco:loco@localhost:5432/loco_app
      enable_logging: false
      connect_timeout: 500
      idle_timeout: 500
      min_connections: 1
      max_connections: 1
      auto_migrate: true
      dangerously_truncate: false
      dangerously_recreate: false
    third_db:
      uri: postgres://loco:loco@localhost:5432/loco_app
      enable_logging: false
      connect_timeout: 500
      idle_timeout: 500
      min_connections: 1
      max_connections: 1
      auto_migrate: true
      dangerously_truncate: false
      dangerously_recreate: false
Next, load this initializer into the initializers hook, as in this sketch:

```rust
async fn initializers(_ctx: &AppContext) -> Result<Vec<Box<dyn Initializer>>> {
    Ok(vec![Box::new(
        loco_extras::initializers::multi_db::MultiDbInitializer,
    )])
}
```
Using in controllers
Now, you can use the multiple databases in your controller (a sketch; the MultiDb helper exposes connections by the names given in settings):

```rust
use axum::Extension;
use loco_rs::db::MultiDb;

pub async fn list(Extension(multi_db): Extension<MultiDb>) -> Result<Response> {
    let third_db = multi_db.get("third_db")?;
    // query `third_db` here
    format::empty()
}
```
Testing
If you used the generator to create a model migration, you should also have an auto-generated model test in tests/models/posts.rs (remember we generated a model named posts?).

A typical test contains everything you need to set up test data, boot the app, and reset the database automatically before the testing code runs. It looks like this sketch (helper names may vary between Loco versions):

```rust
#[tokio::test]
async fn can_find_post() {
    let boot = boot_test::<App, Migrator>().await.unwrap();
    seed::<App>(&boot.app_context.db).await.unwrap();

    // query your model and assert on the results here
}
```
To simplify the testing process, Loco provides helpful functions that make writing tests more convenient. Ensure you enable the testing feature in your Cargo.toml:

[dev-dependencies]
loco-rs = { version = "*", features = ["testing"] }
Database cleanup
In some cases, you may want to run tests with a clean dataset, ensuring that each test is independent of others and not affected by previous data. To enable this feature, set the dangerously_truncate option to true in the config/test.yaml file under the database section. This setting ensures that Loco truncates all data before each test that boots the app.
⚠️ Caution: Be cautious when using this feature to avoid unintentional data loss, especially in a production environment.
- When doing so, it is recommended to run the relevant tests serially, e.g. with the serial_test crate.
- To decide which tables you want to truncate, add the entity models to the App hook (a sketch; the hook signature may vary between Loco versions):

```rust
async fn truncate(db: &DatabaseConnection) -> Result<()> {
    truncate_table(db, users::Entity).await?;
    Ok(())
}
```
Seeding
To seed within a test, call the seed helper after booting the app, as in this sketch:

```rust
#[tokio::test]
async fn can_seed_before_test() {
    let boot = boot_test::<App, Migrator>().await.unwrap();
    seed::<App>(&boot.app_context.db).await.unwrap();
}
```
This documentation provides an in-depth guide on leveraging Loco's testing helpers, covering database cleanup, data cleanup for snapshot testing, and seeding data for tests.
Snapshot test data cleanup
Snapshot testing often involves comparing data structures with dynamic fields such as created_date
, id
, pid
, etc. To ensure consistent snapshots, Loco defines a list of constant data with regex replacements. These replacements can replace dynamic data with placeholders.
Here is an example using insta for snapshots. You can use cleanup_user_model, which cleans all dynamic user model data, as in this sketch:

```rust
#[tokio::test]
async fn can_create_user() {
    // ... create `user` ...
    with_settings!({ filters => cleanup_user_model() }, {
        assert_debug_snapshot!(user);
    });
}
```
You can also use the cleanup constants directly; they start with CLEANUP_.