Models
Models in loco are entity classes that allow for easy database querying and writes, but also migrations and seeding.
Sqlite vs Postgres
You might have selected sqlite which is the default when you created your new app. Loco allows you to seamlessly move between sqlite and postgres.
Typically, you might use sqlite for development and postgres for production. Some people prefer postgres all the way for both development and production because they use pg-specific features. Some people use sqlite for production too, these days. Either way -- all valid choices.
To configure postgres instead of sqlite, go into your config/development.yaml (or production.yaml) and set this, assuming your app is named myapp:
database:
  uri: "{{ get_env(name="DATABASE_URL", default="postgres://loco:loco@localhost:5432/myapp_development") }}"
This connection string uses a user and password of loco:loco and a db named myapp_development. For test and production, your DB should be named myapp_test and myapp_production respectively.
For your convenience, here is a docker command to start up a Postgresql database server:
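(The credentials and database name below match the development connection string above; the postgres image tag is only a suggestion.)

docker run -d -p 5432:5432 \
  -e POSTGRES_USER=loco \
  -e POSTGRES_DB=myapp_development \
  -e POSTGRES_PASSWORD="loco" \
  postgres:15.3-alpine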
Finally you can also use the doctor command to validate your connection:
$ cargo loco doctor
Fat models, slim controllers
loco models are designed after active record. This means they're a central point in your universe, and all the logic and operations your app has should live there.
It means that User::create creates a user but also user.buy(product) will buy a product.
If you agree with that direction you'll get these for free:
- Time-effective testing, because testing your model tests most if not all of your logic and moving parts.
- Ability to run complete app workflows from tasks, or from workers and other places.
- Effectively compose features and use cases by combining models, and nothing else.
- Essentially, models become your app and controllers are just one way to expose your app to the world.
We use SeaORM as the main ORM behind our ActiveRecord abstraction.
- Why not Diesel? - although Diesel has better performance, its macros, and general approach felt incompatible with what we were trying to do
- Why not sqlx? - SeaORM uses sqlx under the hood, so the plumbing is there for you to use raw sqlx if you wish.
Example model
The life of a loco model starts with a migration; then entity Rust code is generated for you automatically from the database structure:
src/
  models/
    _entities/   <--- autogenerated code
      users.rs   <--- the bare entity and helper traits
    users.rs     <--- your custom activerecord code
Using the users activerecord is just like using it under SeaORM (see examples here).
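For instance, here is a minimal sketch of plain SeaORM querying through the generated entity (the list_users helper is hypothetical and assumes a loco AppContext, as available in controllers and tasks):

use sea_orm::EntityTrait;
use loco_rs::app::AppContext;
use crate::models::_entities::users;

// an illustrative helper showing plain SeaORM queries on the generated entity
async fn list_users(ctx: &AppContext) -> loco_rs::Result<()> {
    let first = users::Entity::find_by_id(1).one(&ctx.db).await?;
    let all = users::Entity::find().all(&ctx.db).await?;
    println!("{} users, first: {:?}", all.len(), first);
    Ok(())
}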
You add functionality to the users activerecord by extension:
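For example, a sketch of what src/models/users.rs might contain. find_by_email is a hypothetical helper (not generated for you), and it assumes the users entity has an email column, as in the Loco starters:

use sea_orm::entity::prelude::*;
use super::_entities::users::{self, Entity, Model};

impl Model {
    // hypothetical helper: look a user up by email
    pub async fn find_by_email(db: &DatabaseConnection, email: &str) -> Result<Option<Model>, DbErr> {
        Entity::find()
            .filter(users::Column::Email.eq(email))
            .one(db)
            .await
    }
}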
Crafting models
The model generator
To add a new model, use the model generator. It creates a migration, runs it, and then triggers an entities sync from your database schema, which hydrates and creates your model entities.
$ cargo loco generate model posts title:string! content:text user:references
When a model is added via migration, the following default fields are provided:
- created_at (ts!): This is a timestamp indicating when your model was created.
- updated_at (ts!): This is a timestamp indicating when your model was updated.
These fields are ignored if you provide them in your migration command.
Field syntax
Each field type may include either the ! or ^ suffix:
- ! indicates that the field is required (i.e. NOT NULL in the database)
- ^ indicates that the field must be unique
If no suffix is used, then the field can be null.
Data types
For schema data types, you can use the following mapping to understand the schema:
Loco makes use of the references type to define foreign-key relations between the model being generated and the model you wish to refer to. Note, however, that there are two ways to use this special type:
- <other_model>:references
- <other_model>:references:<column_name>
The first one (<other_model>:references) creates a foreign-key relation to an already existing model (other_model in this case); the field name is implied.
For example, if we wish to create a new model named post that must have a field/column referring to the users table, which already exists (in a new loco project with migrations applied), we use the following command:
cargo loco g model post title:string user:references
Using user:references uses the special <other_model>:references type, which will create a relationship between the post (our new model) and a user (pre-existing model), adding a user_id (implied field name) reference field to the posts table.
On the other hand, the second approach (<other_model>:references:<column_name>) lets us name the field/column as we like. Taking the previous example, if we wish to create a post table with a title and a foreign key that points to the author, we use the same command with a small modification:
cargo loco g model post title:string user:references:authored_by
Using user:references:authored_by uses the special <other_model>:references:<column_name> type, which will create a relationship between the post and the user, adding an authored_by (explicit field name) reference field to the posts table, instead of user_id.
You can generate an empty model:
$ cargo loco generate model posts
Or a data model, without any references:
$ cargo loco generate model posts title:string! content:text
Migrations
Other than using the model generator, you drive your schema by creating migrations.
$ cargo loco generate migration <name of migration> [name:type, name:type ...]
This creates a migration in the root of your project in migration/.
You can apply it:
$ cargo loco db migrate
And generate back entities (Rust code) from it:
$ cargo loco db entities
Loco is a migration-first framework, similar to Rails. This means that when you want to add models, data fields, or other model-oriented changes, you start with a migration that describes them, and then you apply the migration to get back generated entities in models/_entities.
This enforces everything-as-code, reproducibility and atomicity, where no knowledge of the schema goes missing.
Naming the migration is important: the type of migration being generated is inferred from the migration name.
Create a new table
- Name template: Create___
- Example: CreatePosts
$ cargo loco g migration CreatePosts title:string content:string
Add columns
- Name template: Add___To___
- Example: AddNameAndAgeToUsers (the string NameAndAge does not matter, you specify columns individually; however, Users does matter because this will be the name of the table)
$ cargo loco g migration AddNameAndAgeToUsers name:string age:int
Remove columns
- Name template: Remove___From___
- Example: RemoveNameAndAgeFromUsers (the same note as in add columns applies)
$ cargo loco g migration RemoveNameAndAgeFromUsers name:string age:int
Add references
- Name template: Add___RefTo___
- Example: AddUserRefToPosts (User does not matter, as you specify one or many references individually; Posts does matter as it will be the table name in the migration)
$ cargo loco g migration AddUserRefToPosts user:references
Create a join table
- Name template: CreateJoinTable___And___ (supported between 2 tables)
- Example: CreateJoinTableUsersAndGroups
$ cargo loco g migration CreateJoinTableUsersAndGroups count:int
You can also add some state columns regarding the relationship (such as count here).
Create an empty migration
Use any descriptive name for a migration that does not fall into one of the above patterns to create an empty migration.
$ cargo loco g migration FixUsersTable
Down Migrations
If you realize that you made a mistake, you can always undo the migration. This will undo the changes made by the migration (assuming that you added the appropriate code for down in the migration).
The down command on its own will roll back only the last migration. If you want to roll back multiple migrations, you can specify the number of migrations to roll back.
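Assuming your Loco version exposes the db down subcommand (check cargo loco db --help), rolling back looks like this:

$ cargo loco db down      # roll back the last migration
$ cargo loco db down 3    # roll back the last 3 migrations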
Verbs, singular and plural
- references: use singular for the table name, and a <other_model>:references type. user:references (references Users), vote:references (references Votes). <other_model>:references:<column_name> is also available: train:references:departing_train (references Trains).
- column names: anything you like. Prefer snake_case.
- table names: plural, snake case. users, draft_posts.
- migration names: anything that can be a file name, prefer snake case. create_table_users, add_vote_id_to_movies.
- model names: generated automatically for you. Usually the generated name is pascal case, plural. Users, UsersVotes.
Here are some examples showcasing the naming conventions:
- model name in plural: movies
- reference director is in singular: director:references
- reference added_by is an explicit name, the referenced model remains singular: user:references:added_by
- column name in snake case: long_title:string
Authoring migrations
To use the migrations DSL, make sure you have the following loco_rs::schema::* import and SeaORM prelude.
use loco_rs::schema::*;
use sea_orm_migration::prelude::*;
Then, create a struct:
#[derive(DeriveMigrationName)]
pub struct Migration;
And then implement your migration (see below).
Create a table
Create a table, provide two arrays: (1) columns (2) references.
Leave references empty to not create any reference fields.
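A minimal sketch, assuming the create_table helper and the ColType enum exported by loco_rs::schema (helper names and variants can differ between Loco versions, so compare with a migration generated by cargo loco generate model):

use loco_rs::schema::*;
use sea_orm_migration::prelude::*;

#[derive(DeriveMigrationName)]
pub struct Migration;

#[async_trait::async_trait]
impl MigrationTrait for Migration {
    async fn up(&self, m: &SchemaManager) -> Result<(), DbErr> {
        // (1) columns, (2) references -- empty here, so no reference fields
        create_table(m, "movies", &[("title", ColType::StringNull)], &[]).await
    }

    async fn down(&self, m: &SchemaManager) -> Result<(), DbErr> {
        drop_table(m, "movies").await
    }
}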
Create a join table
Provide the references to the second array argument. Use an empty string "" to indicate you want us to generate a reference column name for you (e.g. a user reference will imply connecting the users table through a user_id column in group_users).
Provide a non-empty string to indicate a specific name for the reference column name.
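Under the same assumptions (a create_join_table helper in loco_rs::schema), the up function inside the same MigrationTrait impl as above could look like this. "" lets Loco derive user_id, while the explicit name is used verbatim:

async fn up(&self, m: &SchemaManager) -> Result<(), DbErr> {
    // one state column plus two references; "" derives the column name (user_id)
    create_join_table(m, "group_users", &[("count", ColType::Integer)],
        &[("user", ""), ("group", "owner_group_id")]).await
}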
Add a column
Add a single column. You can use as many such statements as you like in a single migration (to add multiple columns).
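A sketch assuming an add_column helper with a (manager, table, column, type) shape; verify against your Loco version:

async fn up(&self, m: &SchemaManager) -> Result<(), DbErr> {
    // one call per column you want to add
    add_column(m, "users", "last_name", ColType::StringNull).await?;
    add_column(m, "users", "age", ColType::IntegerNull).await
}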
Authoring advanced migrations
Using the manager directly lets you access more advanced operations while authoring your migrations.
Add a column
// `Movies` here is an illustrative Iden enum for your table and columns
manager
    .alter_table(Table::alter().table(Movies::Table).add_column(ColumnDef::new(Movies::Rating).integer()).to_owned())
    .await
Drop a column
manager
    .alter_table(Table::alter().table(Movies::Table).drop_column(Movies::Rating).to_owned())
    .await
Add index
You can copy some of this code for adding an index:
manager
    .create_index(Index::create().name("idx-movies-title").table(Movies::Table).col(Movies::Title).to_owned())
    .await;
Create a data fix
Creating a data fix in a migration is easy - just use SQL statements as you like:
async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
    let db = manager.get_connection();
    // illustrative raw SQL data fix
    db.execute_unprepared("UPDATE users SET email = LOWER(email)").await?;
    Ok(())
}
Having said that, it's up to you to code your data fixes in:
- task - where you can use high level models
- migration - where you can both change structure and fix data stemming from it with raw SQL
- an ad-hoc playground - where you can use high level models or experiment with things
Validation
We use the validator library under the hood. First, build your validator with the constraints you need, and then implement Validatable for your ActiveModel.
Note that Validatable is how you instruct Loco which Validator to provide and how to build it from a model.
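A sketch of how this can look for a users model. The Validator struct and its constraints use the validator crate's derive; the Validatable impl shape and its import path are assumptions, so compare with the users model in a Loco starter:

use loco_rs::prelude::*; // Validatable is assumed to be re-exported here
use validator::Validate;
use super::_entities::users;

#[derive(Debug, Validate)]
pub struct Validator {
    #[validate(length(min = 2, message = "Name must be at least 2 characters long."))]
    pub name: String,
}

// assumed shape: tell Loco how to build the Validator from the ActiveModel
impl Validatable for users::ActiveModel {
    fn validator(&self) -> Box<dyn Validate> {
        Box::new(Validator {
            name: self.name.as_ref().to_owned(),
        })
    }
}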
Now you can use user.validate() seamlessly in your code, when it is Ok the model is valid, otherwise you'll find validation errors in Err(...) available for inspection.
Relationships
One to many
Here is how to associate a Company with an existing User model.
$ cargo loco generate model company name:string user:references
This will create a migration with a user_id field in Company which will reference a User.
Many to many
Here is how to create a typical "votes" table, which links a User and a Movie with a many-to-many link table. Note that it uses the special --link flag in the model generator.
Let's create a new Movie entity:
$ cargo loco generate model movies title:string
And now the link table between User (which we already have) and Movie (which we just generated) to record votes:
$ cargo loco generate model --link users_votes user:references movie:references vote:int
..
..
Writing src/models/_entities/movies.rs
Writing src/models/_entities/users.rs
Writing src/models/_entities/mod.rs
Writing src/models/_entities/prelude.rs
... Done.
This will create a many-to-many link table named UsersVotes with a composite primary key containing both user_id and movie_id. Because it has precisely 2 IDs, SeaORM will identify it as a many-to-many link table, and generate entities with the appropriate via() relationship:
// User, newly generated entity with a `via` relation at _entities/users.rs
impl Related<super::movies::Entity> for Entity {
    fn to() -> RelationDef { super::users_votes::Relation::Movies.def() }
    fn via() -> Option<RelationDef> { Some(super::users_votes::Relation::Users.def().rev()) }
}
Using via() will cause find_related to walk through the link table without you needing to know the details of the link table.
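For example, a sketch (the helper, the loaded user model, and the AppContext are all hypothetical):

use sea_orm::ModelTrait;
use loco_rs::app::AppContext;
use crate::models::_entities::{movies, users};

// hypothetical helper: all movies a given user voted on
async fn voted_movies(ctx: &AppContext, user: &users::Model) -> loco_rs::Result<Vec<movies::Model>> {
    // find_related walks users -> users_votes -> movies via the link table
    Ok(user.find_related(movies::Entity).all(&ctx.db).await?)
}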
Configuration
Model configuration controls all aspects of development, testing, and production, with a ton of goodies coming from production experience.
database:
  # Database connection URI
  uri:
  # When enabled, the sql query will be logged.
  enable_logging: false
  # Set the timeout duration when acquiring a connection.
  connect_timeout: 500
  # Set the idle duration before closing a connection.
  idle_timeout: 500
  # Minimum number of connections for a pool.
  min_connections: 1
  # Maximum number of connections for a pool.
  max_connections: 1
  # Run migration up when the application loads.
  auto_migrate: true
  # Truncate database when the application loads. This is a dangerous operation; make sure you use this flag only in dev environments or in test mode.
  dangerously_truncate: false
  # Recreate the schema when the application loads. This is a dangerous operation; make sure you use this flag only in dev environments or in test mode.
  dangerously_recreate: false
By combining these flags, you can create different experiences to help you be more productive.
You can truncate before an app starts -- which is useful for running tests, or you can recreate the entire DB when the app starts -- which is useful for integration tests or setting up a new environment. In production, you want these turned off (hence the "dangerously" part).
Seeding
Loco comes equipped with a convenient seeds feature, streamlining the process for quick and easy database reloading. This functionality proves especially invaluable during frequent resets in development and test environments. Let's explore how to get started with this feature:
Creating a new seed
1. Creating a new seed file
Navigate to src/fixtures and create a new seed file. For instance:
src/
  fixtures/
    users.yaml
In this yaml file, list a set of database records for insertion. Each record should include the mandatory database fields, based on your database constraints. Optional values are at your discretion. Suppose you have a database DDL like this:
CREATE TABLE users (
id serial4 NOT NULL,
email varchar NOT NULL,
"password" varchar NOT NULL,
reset_token varchar NULL,
created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT users_email_key UNIQUE (email),
CONSTRAINT users_pkey PRIMARY KEY (id)
);
The mandatory fields include id, password, email, and created_at. The reset token can be left empty. Your seed file content should resemble the following:
---
- id: 1
email: user1@example.com
password: "$2b$12$gf4o2FShIahg/GY6YkK2wOcs8w4.lu444wP6BL3FyjX0GsxnEV6ZW"
created_at: "2023-11-12T12:34:56.789"
- id: 2
pid: 22222222-2222-2222-2222-222222222222
email: user2@example.com
reset_token: "SJndjh2389hNJKnJI90U32NKJ"
password: "$2b$12$gf4o2FShIahg/GY6YkK2wOcs8w4.lu444wP6BL3FyjX0GsxnEV6ZW"
created_at: "2023-11-12T12:34:56.789"
Connect the seed
Integrate your seed into the app's Hook implementations by following these steps:
- Navigate to your app's Hook implementations.
- Add the seed within the seed function implementation. Here's an example in Rust:
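A sketch, assuming a Loco version whose Hooks::seed function receives the app context and the fixtures base path, and the db::seed helper from loco_rs (both may differ slightly between versions):

use std::path::Path;
use loco_rs::{app::AppContext, db, Result};
use crate::models::_entities::users;

// inside your `impl Hooks for App`
async fn seed(ctx: &AppContext, base: &Path) -> Result<()> {
    // insert the records from the users fixture created above
    db::seed::<users::ActiveModel>(&ctx.db, &base.join("users.yaml").display().to_string()).await?;
    Ok(())
}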
This implementation ensures that the seed is executed when the seed function is called. Adjust the specifics based on your application's structure and requirements.
Managing Seed via CLI
- Reset the Database: clear all existing data before importing seed files. This is useful when you want to start with a fresh database state, ensuring no old data remains.
- Dump Database Tables to Files: export the contents of your database tables to files. This feature allows you to back up the current state of your database or prepare data for reuse across environments.
To access the seed commands, use the following CLI structure:
$ cargo loco db seed --help
Using a Test
- Enable the testing feature (testing).
- In your test section, follow the example below:
use loco_rs::testing::prelude::*;
use myapp::app::App;

#[tokio::test]
async fn can_seed_users() {
    // helper names follow loco_rs::testing; verify against your Loco version
    let boot = boot_test::<App>().await.unwrap();
    seed::<App>(&boot.app_context).await.unwrap();
}
Multi-DB
Loco enables you to work with more than one database and share instances across your application.
Extra DB
To set up an additional database, begin with database connections and configuration. The recommended approach is to navigate to your configuration file and add the following under settings:
initializers:
  extra_db:
    uri: postgres://loco:loco@localhost:5432/loco_app
    enable_logging: false
    connect_timeout: 500
    idle_timeout: 500
    min_connections: 1
    max_connections: 1
    auto_migrate: true
    dangerously_truncate: false
    dangerously_recreate: false
Load this initializer into the initializers hook, as in this example:
// inside your `impl Hooks for App` (the hook's signature may vary slightly between Loco versions)
async fn initializers(_ctx: &AppContext) -> Result<Vec<Box<dyn Initializer>>> {
    // register the extra-DB initializer from the linked example here
    Ok(vec![/* Box::new(<your extra_db initializer>) */])
}
Now, you can use the secondary database in your controller:
use axum::Extension;
use sea_orm::DatabaseConnection;
use loco_rs::prelude::*;

// illustrative handler: the extra_db initializer exposes the secondary
// connection as an axum Extension
pub async fn list(Extension(secondary_db): Extension<DatabaseConnection>) -> Result<Response> {
    // run queries against `secondary_db` here
    format::empty()
}
Multi-DB (multi-tenant)
To connect more than two different databases, the database configuration should look like this:
initializers:
  multi_db:
    secondary_db:
      uri: postgres://loco:loco@localhost:5432/loco_app
      enable_logging: false
      connect_timeout: 500
      idle_timeout: 500
      min_connections: 1
      max_connections: 1
      auto_migrate: true
      dangerously_truncate: false
      dangerously_recreate: false
    third_db:
      uri: postgres://loco:loco@localhost:5432/loco_app
      enable_logging: false
      connect_timeout: 500
      idle_timeout: 500
      min_connections: 1
      max_connections: 1
      auto_migrate: true
      dangerously_truncate: false
      dangerously_recreate: false
Next, load this initializer into the initializers hook, as in this example:
// inside your `impl Hooks for App` (the hook's signature may vary slightly between Loco versions)
async fn initializers(_ctx: &AppContext) -> Result<Vec<Box<dyn Initializer>>> {
    // register the multi-DB initializer from the linked example here
    Ok(vec![/* Box::new(<your multi_db initializer>) */])
}
Now, you can use the multiple databases in your controller:
use axum::Extension;
use sea_orm::DatabaseConnection;
use loco_rs::prelude::*;
// the `MultiDb` import path is an assumption; bring it in from wherever your
// multi-DB initializer defines it
use loco_extras::initializers::multi_db::MultiDb;

pub async fn list(Extension(multi_db): Extension<MultiDb>) -> Result<Response> {
    // pick the connection by the name used in your configuration; the exact
    // MultiDb lookup API is an assumption, check your initializer's docs, e.g.:
    // let secondary_db: &DatabaseConnection = multi_db.get("secondary_db")?;
    format::empty()
}
Testing
If you used the generator to create a model migration, you should also have an auto-generated model test in tests/models/posts.rs (remember we generated a model named post?)
A typical test contains everything you need to set up test data, boot the app, and reset the database automatically before the testing code runs. It looks like this:
use loco_rs::testing::prelude::*;
use myapp::app::App;
use serial_test::serial;

#[tokio::test]
#[serial]
async fn can_find_posts() {
    // boots the app and prepares a clean test database; names here are
    // illustrative -- compare with the generated tests/models/posts.rs
    let boot = boot_test::<App>().await.unwrap();
    seed::<App>(&boot.app_context).await.unwrap();
    // query the posts model via boot.app_context.db and assert on it here
}
To simplify the testing process, Loco provides helpful functions that make writing tests more convenient. Ensure you enable the testing feature in your Cargo.toml:
[dev-dependencies]
loco-rs = { version = "*", features = ["testing"] }
Database cleanup
In some cases, you may want to run tests with a clean dataset, ensuring that each test is independent of others and not affected by previous data. To enable this feature, modify the dangerously_truncate option to true in the config/test.yaml file under the database section. This setting ensures that Loco truncates all data before each test that implements the boot app.
⚠️ Caution: Be cautious when using this feature to avoid unintentional data loss, especially in a production environment.
- When doing this, it is recommended to run the relevant tests serially, e.g. with the serial_test crate.
- To decide which tables you want to truncate, add the entity model to the App hook:
// inside your `impl Hooks for App`; the truncate_table helper and the hook
// signature are assumptions -- check your Loco version
async fn truncate(ctx: &AppContext) -> Result<()> {
    truncate_table(&ctx.db, users::Entity).await?;
    Ok(())
}
Async
When writing async tests with database data, it's important to ensure that one test does not affect the data used by other tests. Since async tests can run concurrently on the same database dataset, this can lead to unstable test results.
Instead of using boot_test, as described in the documentation for synchronous tests, use the boot_test_with_create_db function. This function generates a random database schema name and ensures that the tables are deleted once the test is completed.
Note: If you cancel the test run midway (e.g., by pressing Ctrl + C), the cleanup process will not execute, and the database tables will remain. In such cases, you will need to manually remove them.
use loco_rs::testing::prelude::*;
use myapp::app::App;

#[tokio::test]
async fn can_create_user_in_isolated_db() {
    // creates a randomly named database for this test and drops it afterwards
    let boot = boot_test_with_create_db::<App>().await.unwrap();
    // run data-dependent assertions against boot.app_context.db here
}
Seeding
use loco_rs::testing::prelude::*;
use myapp::app::App;
use serial_test::serial;

#[tokio::test]
#[serial]
async fn can_seed_data() {
    let boot = boot_test::<App>().await.unwrap();
    // runs the app's seed hook against the test database
    // (helper name/shape may vary across Loco versions)
    seed::<App>(&boot.app_context).await.unwrap();
}
This documentation provides an in-depth guide on leveraging Loco's testing helpers, covering database cleanup, data cleanup for snapshot testing, and seeding data for tests.
Snapshot test data cleanup
Snapshot testing often involves comparing data structures with dynamic fields such as created_date, id, pid, etc. To ensure consistent snapshots, Loco defines a list of constant data with regex replacements. These replacements can replace dynamic data with placeholders.
Example using insta for snapshots.
In the following example, you can use cleanup_user_model, which cleans all user model data.
use insta::with_settings;
use loco_rs::testing::prelude::*;
use myapp::app::App;

#[tokio::test]
async fn can_snapshot_user() {
    let boot = boot_test::<App>().await.unwrap();
    // load or create a user model here, then snapshot it with the cleanup
    // filters applied (dynamic fields such as id/pid/created_at get replaced)
    with_settings!({ filters => cleanup_user_model() }, {
        // insta::assert_debug_snapshot!(user);
    });
}
You can also use cleanup constants directly, starting with CLEANUP_.
Customizing Entity Generation
You can customize how sea-orm-cli generates entities by adding configuration to your Cargo.toml under the [package.metadata.db.entity] section. For example:
[package.metadata.db.entity]
# key names are assumed to follow the corresponding sea-orm-cli flags
max-connections = 1
ignore-tables = "table1,table2"
model-extra-derives = "CustomDerive"
This configuration will be passed as flags to sea-orm-cli generate entity when running cargo loco db entities.
Note that some flags like --output-dir and --database-url cannot be overridden as they are managed by Loco.