Introducing sqlcomment - Database Performance Analysis with Ent and Google's Sqlcommenter

Ent is a powerful Entity framework that helps developers write neat code that is translated into (possibly complex) database queries. As the usage of your application grows, it doesn't take long until you stumble upon performance issues with your database. Troubleshooting database performance issues is notoriously hard, especially when you're not equipped with the right tools.

The following example shows how Ent query code is translated into an SQL query.

ent example 1

Example 1 - ent code is translated to SQL query

Traditionally, it has been very difficult to correlate between poorly performing database queries and the application code that is generating them. Database performance analysis tools could help point out slow queries by analyzing database server logs, but how could they be traced back to the application?

Sqlcommenter#

Earlier this year, Google introduced Sqlcommenter. Sqlcommenter is

an open source library that addresses the gap between the ORM libraries and understanding database performance. Sqlcommenter gives application developers visibility into which application code is generating slow queries and maps application traces to database query plans.

In other words, Sqlcommenter adds application context metadata to SQL queries. This information can then be used to provide meaningful insights. It does so by adding [SQL comments](https://en.wikipedia.org/wiki/SQL_syntax#Comments) to the query that carry metadata but are ignored by the database during query execution. For example, the following query contains a comment that carries metadata about the application that issued it (`users-mgr`), which controller and route triggered it (`users` and `user_rename`, respectively), and the database driver that was used (`ent:v0.9.1`):
update users set username = 'hedwigz' where id = 88
/*application='users-mgr',controller='users',route='user_rename',db_driver='ent:v0.9.1'*/

To get a taste of how the analysis of metadata collected by Sqlcommenter can help us better understand performance issues in our application, consider the following example: Google Cloud recently launched Cloud SQL Insights, a cloud-based SQL performance analysis product. In the image below, we see a screenshot from the Cloud SQL Insights Dashboard that shows that the HTTP route 'api/users' is causing many locks on the database. We can also see that this query was called 16,067 times in the last 6 hours.

Cloud SQL insights

Screenshot from Cloud SQL Insights Dashboard

This is the power of SQL tags - they provide correlation between your application-level information and your database monitors.

sqlcomment#

sqlcomment is an Ent driver that adds metadata to SQL queries using comments following the sqlcommenter specification. By wrapping an existing Ent driver with sqlcomment, users can leverage any tool that supports the standard to triage query performance issues. Without further ado, let's see sqlcomment in action.

First, to install sqlcomment run:

go get ariga.io/sqlcomment

sqlcomment wraps an underlying SQL driver; therefore, we need to open our SQL connection using Ent's sql module instead of the popular helper ent.Open.

Make sure to import entgo.io/ent/dialect/sql in the following snippet:
// Create db driver.
db, err := sql.Open("sqlite3", "file:ent?mode=memory&cache=shared&_fk=1")
if err != nil {
	log.Fatalf("Failed to connect to database: %v", err)
}

// Create sqlcomment driver which wraps sqlite driver.
drv := sqlcomment.NewDriver(db,
	sqlcomment.WithDriverVerTag(),
	sqlcomment.WithTags(sqlcomment.Tags{
		sqlcomment.KeyApplication: "my-app",
		sqlcomment.KeyFramework:   "net/http",
	}),
)

// Create and configure ent client.
client := ent.NewClient(ent.Driver(drv))

Now, whenever we execute a query, sqlcomment will suffix our SQL query with the tags we set up. If we were to run the following query:

client.User.
	Update().
	Where(
		user.Or(
			user.AgeGT(30),
			user.Name("bar"),
		),
		user.HasFollowers(),
	).
	SetName("foo").
	Save(ctx)

Ent would output the following commented SQL query:

UPDATE `users`
SET `name` = ?
WHERE (
    `users`.`age` > ?
    OR `users`.`name` = ?
)
AND `users`.`id` IN (
    SELECT `user_following`.`follower_id`
    FROM `user_following`
)
/*application='my-app',db_driver='ent:v0.9.1',framework='net%2Fhttp'*/

As you can see, Ent produced an SQL query with a comment at the end, containing all the relevant information associated with that query.

sqlcomment supports more tags, and has integrations with OpenTelemetry and OpenCensus. To see more examples and scenarios, please visit the GitHub repo.

Wrapping-Up#

In this post I showed how adding metadata to queries using SQL comments can help correlate between source code and database queries. Next, I introduced sqlcomment - an Ent driver that adds SQL tags to all of your queries. Finally, we saw sqlcomment in action by installing and configuring it with Ent. If you like the code and/or want to contribute - feel free to check out the project on GitHub.

Have questions? Need help with getting started? Feel free to join our Slack channel.


Announcing entcache - a Cache Driver for Ent

While working on Ariga's operational data graph query engine, we saw the opportunity to greatly improve the performance of many use cases by building a robust caching library. As heavy users of Ent, it was only natural for us to implement this layer as an extension to Ent. In this post, I will briefly explain what caches are, how they fit into software architectures, and present entcache - a cache driver for Ent.

Caching is a popular strategy for improving application performance. It is based on the observation that the speed for retrieving data using different types of media can vary within many orders of magnitude. Jeff Dean famously presented the following numbers in a lecture about "Software Engineering Advice from Building Large-Scale Distributed Systems":

cache numbers

These numbers show things that experienced software engineers know intuitively: reading from memory is faster than reading from disk, retrieving data from the same data center is faster than going out to the internet to fetch it. We add to that, that some calculations are expensive and slow, and that fetching a precomputed result can be much faster (and less expensive) than recomputing it every time.

The collective intelligence of Wikipedia tells us that a Cache is "a hardware or software component that stores data so that future requests for that data can be served faster". In other words, if we can store a query result in RAM, we can fulfill a request that depends on it much faster than if we need to go over the network to our database, have it read data from disk, run some computation on it, and only then send it back to us (over a network).

However, as software engineers, we should remember that caching is a notoriously complicated topic. As the phrase coined by early-day Netscape engineer Phil Karlton says: "There are only two hard things in Computer Science: cache invalidation and naming things". For instance, in systems that rely on strong consistency, a cache entry may be stale, therefore causing the system to behave incorrectly. For this reason, take great care and pay attention to detail when you are designing caches into your system architectures.

Presenting entcache#

The entcache package provides its users with a new Ent driver that can wrap one of the existing SQL drivers available for Ent. On a high level, it decorates the Query method of the given driver, and for each call:

  1. Generates a cache key (i.e. hash) from its arguments (i.e. statement and parameters).

  2. Checks the cache to see if the results for this query are already available. If they are (this is called a cache-hit), the database is skipped and results are returned to the caller from memory.

  3. If the cache does not contain an entry for the query, the query is passed to the database.

  4. After the query is executed, the driver records the raw values of the returned rows (sql.Rows), and stores them in the cache with the generated cache key.

The package provides a variety of options to configure the TTL of the cache entries, control the hash function, provide custom and multi-level cache stores, evict and skip cache entries. See the full documentation in https://pkg.go.dev/ariga.io/entcache.
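For example, a driver-level cache with a global TTL might be configured as follows (a minimal sketch, assuming the TTL option as named in the entcache documentation and an already-opened db driver):

// A minimal sketch, assuming entcache.TTL as documented.
drv := entcache.NewDriver(
	db,
	entcache.TTL(30*time.Second), // drop cached entries 30 seconds after they are stored.
)
client := ent.NewClient(ent.Driver(drv))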

As we mentioned above, correctly configuring caching for an application is a delicate task, and so entcache provides developers with different caching levels that can be used with it:

  1. A context.Context-based cache. Usually attached to a request, it does not work with other cache levels. It is used to eliminate duplicate queries that are executed by the same request.

  2. A driver-level cache used by the ent.Client. An application usually creates a driver per database, and therefore, we treat it as a process-level cache.

  3. A remote cache. For example, a Redis database that provides a persistence layer for storing and sharing cache entries between multiple processes. A remote cache layer is resistant to application deployment changes or failures, and allows reducing the number of identical queries executed on the database by different processes.

  4. A cache hierarchy, or multi-level cache, allows structuring the cache in a hierarchical way. The hierarchy of cache stores is mostly based on access speeds and cache sizes. For example, a 2-level cache composed of an LRU cache in the application memory and a remote-level cache backed by a Redis database, as shown in the sketch below.
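As an illustration, a 2-level cache might be wired up like this (a hedged sketch following the entcache README; the Levels, NewLRU and NewRedis names are assumptions based on its documentation, and may differ in your version):

// Level 2 is backed by a Redis server; the client setup is an assumption.
rdb := redis.NewClient(&redis.Options{Addr: ":6379"})
drv := entcache.NewDriver(
	db,
	entcache.Levels(
		entcache.NewLRU(256),   // level 1: in-process LRU cache with 256 entries.
		entcache.NewRedis(rdb), // level 2: shared, Redis-backed cache.
	),
)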

Let's demonstrate this by explaining the context.Context based cache.

Context-Level Cache#

The ContextLevel option configures the driver to work with a context.Context level cache. The context is usually attached to a request (e.g. *http.Request) and is not available in multi-level mode. When this option is used as a cache store, the attached context.Context carries an LRU cache (can be configured differently), and the driver stores and searches entries in the LRU cache when queries are executed.
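Wiring this up might look as follows (a short sketch, assuming the ContextLevel and NewContext helpers as described in the entcache documentation):

drv := entcache.NewDriver(db, entcache.ContextLevel())
client := ent.NewClient(ent.Driver(drv))

// Attach a request-scoped cache to the context before running queries.
ctx := entcache.NewContext(context.Background())
users, err := client.User.Query().All(ctx)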

This option is ideal for applications that require strong consistency, but still want to avoid executing duplicate database queries on the same request. For example, given the following GraphQL query:

query($ids: [ID!]!) {
  nodes(ids: $ids) {
    ... on User {
      id
      name
      todos {
        id
        owner {
          id
          name
        }
      }
    }
  }
}

A naive solution for resolving the above query would execute 1 query for getting N users, another N queries for getting the todos of each user, and a query for each todo item for getting its owner (read more about the N+1 Problem).

However, Ent provides a unique approach for resolving such queries (read more on the Ent website), and therefore only 3 queries will be executed in this case: 1 for getting N users, 1 for getting the todo items of all users, and 1 query for getting the owners of all todo items.

With entcache, the number of queries may be reduced to 2, as the first and last queries are identical (see code example).

context-level-cache

The different levels are explained in depth in the repository README.

Getting Started#

If you are not familiar with how to set up a new Ent project, complete Ent Setting Up tutorial first.

First, go get the package using the following command.

go get ariga.io/entcache

After installing entcache, you can easily add it to your project with the snippet below:

// Open the database connection.
db, err := sql.Open(dialect.SQLite, "file:ent?mode=memory&cache=shared&_fk=1")
if err != nil {
	log.Fatal("opening database", err)
}
// Decorate the sql.Driver with entcache.Driver.
drv := entcache.NewDriver(db)
// Create an ent.Client.
client := ent.NewClient(ent.Driver(drv))
// Tell the entcache.Driver to skip the caching layer
// when running the schema migration.
if err := client.Schema.Create(entcache.Skip(ctx)); err != nil {
	log.Fatal("running schema migration", err)
}
// Run queries.
if _, err := client.User.Get(ctx, id); err != nil {
	log.Fatal("querying user", err)
}
// The query below is cached.
if _, err := client.User.Get(ctx, id); err != nil {
	log.Fatal("querying user", err)
}

To see more advanced examples, head over to the repo's examples directory.

Wrapping Up#

In this post, I presented entcache, a new cache driver for Ent that I developed while working on Ariga's Operational Data Graph query engine. We started the discussion by briefly mentioning the motivation for including caches in software systems. Following that, we described the features and capabilities of entcache and concluded with a short example of how you can set it up in your application.

There are a few features we are working on, and wish to work on, but need help from the community to design them properly (solving cache invalidation, anyone? ;)). If you are interested in contributing, reach out to me on the Ent Slack channel.


Generating Ent Schemas from Existing SQL Databases

A few months ago the Ent project announced the Schema Import Initiative, whose goal is to help support many use cases for generating Ent schemas from external resources. Today, I'm happy to share a project I've been working on: entimport - an importent (pun intended) command line tool designed to create Ent schemas from existing SQL databases. This is a feature that has been requested by the community for some time, so I hope many people find it useful. It can help ease the transition of an existing setup from another language or ORM to Ent. It can also help with use cases where you would like to access the same data from different platforms (such as to automatically sync between them).
The first version supports both MySQL and PostgreSQL databases, with some limitations described below. Support for other relational databases such as SQLite is in the works.

Getting Started#

To give you an idea of how entimport works, I want to share a quick example of end to end usage with a MySQL database. On a high-level, this is what we're going to do:

  1. Create a Database and Schema - we want to show how entimport can generate an Ent schema for an existing database. We will first create a database, then define some tables in it that we can import into Ent.
  2. Initialize an Ent Project - we will use the Ent CLI to create the needed directory structure and an Ent schema generation script.
  3. Install entimport
  4. Run entimport against our demo database - next, we will import the database schema that weโ€™ve created into our Ent project.
  5. Explain how to use Ent with our generated schemas.

Let's get started.

Create a Database#

We're going to start by creating a database. The way I prefer to do it is to use a Docker container. We will use a docker-compose file which automatically passes all needed parameters to the MySQL container.

Start the project in a new directory called entimport-example. Create a file named docker-compose.yaml and paste the following content inside:

version: "3.7"

services:
  mysql8:
    platform: linux/amd64
    image: mysql
    environment:
      MYSQL_DATABASE: entimport
      MYSQL_ROOT_PASSWORD: pass
    healthcheck:
      test: mysqladmin ping -ppass
    ports:
      - "3306:3306"

This file contains the service configuration for a MySQL docker container. Run it with the following command:

docker-compose up -d

Next, we will create a simple schema. For this example we will use a relation between two entities:

  • User
  • Car

Connect to the database using the MySQL shell with the following command:

Make sure you run it from the root project directory

docker-compose exec mysql8 mysql --database=entimport -ppass

create table users
(
    id        bigint auto_increment primary key,
    age       bigint       not null,
    name      varchar(255) not null,
    last_name varchar(255) null comment 'surname'
);

create table cars
(
    id          bigint auto_increment primary key,
    model       varchar(255) not null,
    color       varchar(255) not null,
    engine_size mediumint    not null,
    user_id     bigint       null,
    constraint cars_owners foreign key (user_id) references users (id) on delete set null
);

Let's validate that we've created the tables mentioned above, in your MySQL shell, run:

show tables;
+---------------------+
| Tables_in_entimport |
+---------------------+
| cars                |
| users               |
+---------------------+

We should see two tables: users and cars.

Initialize Ent Project#

Now that we've created our database and a baseline schema to demonstrate our example, we need to create a Go project with Ent. Since we would eventually like to use our imported schema, we need to create the Ent directory structure.

Initialize a new Go project inside a directory called entimport-example:

go mod init entimport-example

Run Ent Init:

go run -mod=mod entgo.io/ent/cmd/ent init

The project should look like this:

├── docker-compose.yaml
├── ent
│   ├── generate.go
│   └── schema
└── go.mod

Install entimport#

OK, now the fun begins! We are finally ready to install entimport and see it in action.
Let's start by running entimport:

go run -mod=mod ariga.io/entimport/cmd/entimport -h

entimport will be downloaded and the command will print:

Usage of entimport:
  -dialect string
        database dialect (default "mysql")
  -dsn string
        data source name (connection information)
  -schema-path string
        output path for ent schema (default "./ent/schema")
  -tables value
        comma-separated list of tables to inspect (all if empty)

Run entimport#

We are now ready to import our MySQL schema to Ent!

We will do it with the following command:

This command will import all tables in our schema; you can also limit it to specific tables using the -tables flag.

go run ariga.io/entimport/cmd/entimport -dialect mysql -dsn "root:pass@tcp(localhost:3306)/entimport"

Like many Unix tools, entimport doesn't print anything on a successful run. To verify that it ran properly, we will check the file system, and more specifically the ent/schema directory.

├── docker-compose.yaml
├── ent
│   ├── generate.go
│   └── schema
│       ├── car.go
│       └── user.go
├── go.mod
└── go.sum

Let's see what this gives us - remember that we had two tables, users and cars, with a one-to-many relationship between them. Let's see how entimport performed.

entimport-example/ent/schema/user.go

type User struct {
	ent.Schema
}

func (User) Fields() []ent.Field {
	return []ent.Field{field.Int("id"), field.Int("age"), field.String("name"), field.String("last_name").Optional().Comment("surname")}
}

func (User) Edges() []ent.Edge {
	return []ent.Edge{edge.To("cars", Car.Type)}
}

func (User) Annotations() []schema.Annotation {
	return nil
}

entimport-example/ent/schema/car.go

type Car struct {
	ent.Schema
}

func (Car) Fields() []ent.Field {
	return []ent.Field{field.Int("id"), field.String("model"), field.String("color"), field.Int32("engine_size"), field.Int("user_id").Optional()}
}

func (Car) Edges() []ent.Edge {
	return []ent.Edge{edge.From("user", User.Type).Ref("cars").Unique().Field("user_id")}
}

func (Car) Annotations() []schema.Annotation {
	return nil
}

entimport successfully created entities and their relation!

So far so good. Now let's actually try them out. First, we must run Ent's code generation, since Ent is a schema-first ORM that generates Go code for interacting with different databases.

To run the Ent code generation:

go generate ./ent

Let's see our ent directory:

...
├── ent
│   ├── car
│   │   ├── car.go
│   │   └── where.go
...
│   ├── schema
│   │   ├── car.go
│   │   └── user.go
...
│   ├── user
│   │   ├── user.go
│   │   └── where.go
...

Ent Example#

Let's run a quick example to verify that our schema works:

Create a file named example.go in the root of the project, with the following content:

This part of the example can be found here

entimport-example/example.go

package main

import (
	"context"
	"fmt"
	"log"

	"entimport-example/ent"

	"entgo.io/ent/dialect"
	_ "github.com/go-sql-driver/mysql"
)

func main() {
	client, err := ent.Open(dialect.MySQL, "root:pass@tcp(localhost:3306)/entimport?parseTime=True")
	if err != nil {
		log.Fatalf("failed opening connection to mysql: %v", err)
	}
	defer client.Close()
	ctx := context.Background()
	example(ctx, client)
}

Let's try to add a user. Write the following code at the end of the file:

entimport-example/example.go

func example(ctx context.Context, client *ent.Client) {
	// Create a User.
	zeev := client.User.
		Create().
		SetAge(33).
		SetName("Zeev").
		SetLastName("Manilovich").
		SaveX(ctx)
	fmt.Println("User created:", zeev)
}

Then run:

go run example.go

This should output:

# User created: User(id=1, age=33, name=Zeev, last_name=Manilovich)

Let's check the database to see if the user was really added:

SELECT *
FROM users
WHERE name = 'Zeev';

+----+-----+------+------------+
| id | age | name | last_name  |
+----+-----+------+------------+
| 1  | 33  | Zeev | Manilovich |
+----+-----+------+------------+

Great! Now let's play a little more with Ent and add some relations. Add the following code at the end of the example() func:

Make sure you add "entimport-example/ent/user" to the import() declaration.

entimport-example/example.go

// Create Car.
vw := client.Car.
	Create().
	SetModel("volkswagen").
	SetColor("blue").
	SetEngineSize(1400).
	SaveX(ctx)
fmt.Println("First car created:", vw)

// Update the user - add the car relation.
client.User.Update().Where(user.ID(zeev.ID)).AddCars(vw).SaveX(ctx)

// Query all cars that belong to the user.
cars := zeev.QueryCars().AllX(ctx)
fmt.Println("User cars:", cars)

// Create a second Car.
delorean := client.Car.
	Create().
	SetModel("delorean").
	SetColor("silver").
	SetEngineSize(9999).
	SaveX(ctx)
fmt.Println("Second car created:", delorean)

// Update the user - add another car relation.
client.User.Update().Where(user.ID(zeev.ID)).AddCars(delorean).SaveX(ctx)

// Traverse the sub-graph.
cars = delorean.
	QueryUser().
	QueryCars().
	AllX(ctx)
fmt.Println("User cars:", cars)

This part of the example can be found here

Now run: go run example.go.
After running the code above, the database should hold a user with 2 cars in an O2M relation.

SELECT *
FROM users;

+----+-----+------+------------+
| id | age | name | last_name  |
+----+-----+------+------------+
| 1  | 33  | Zeev | Manilovich |
+----+-----+------+------------+

SELECT *
FROM cars;

+----+------------+--------+-------------+---------+
| id | model      | color  | engine_size | user_id |
+----+------------+--------+-------------+---------+
| 1  | volkswagen | blue   | 1400        | 1       |
| 2  | delorean   | silver | 9999        | 1       |
+----+------------+--------+-------------+---------+

Syncing DB changes#

Since we want to keep the database in sync, we want entimport to be able to update the generated schema after the database is changed. Let's see how it works.

Run the following SQL code to add a phone column with a unique index to the users table:

alter table users
    add phone varchar(255) null;

create unique index users_phone_uindex
    on users (phone);

The table should look like this:

describe users;
+-----------+--------------+------+-----+---------+----------------+
| Field     | Type         | Null | Key | Default | Extra          |
+-----------+--------------+------+-----+---------+----------------+
| id        | bigint       | NO   | PRI | NULL    | auto_increment |
| age       | bigint       | NO   |     | NULL    |                |
| name      | varchar(255) | NO   |     | NULL    |                |
| last_name | varchar(255) | YES  |     | NULL    |                |
| phone     | varchar(255) | YES  | UNI | NULL    |                |
+-----------+--------------+------+-----+---------+----------------+

Now let's run entimport again to get the latest schema from our database:

go run -mod=mod ariga.io/entimport/cmd/entimport -dialect mysql -dsn "root:pass@tcp(localhost:3306)/entimport"

We can see that the user.go file was changed:

entimport-example/ent/schema/user.go

func (User) Fields() []ent.Field {
	return []ent.Field{field.Int("id"), ..., field.String("phone").Optional().Unique()}
}

Now we can run go generate ./ent again and use the new schema to add a phone to the User entity.
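For instance, after regenerating, the client should expose a setter for the new column (a hypothetical usage sketch; the name SetPhone follows Ent's conventions for a string field named phone):

zeev := client.User.
	Create().
	SetAge(33).
	SetName("Zeev").
	SetLastName("Manilovich").
	SetPhone("555-1234"). // setter generated for the new phone column.
	SaveX(ctx)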

Future Plans#

As mentioned above, this initial version supports MySQL and PostgreSQL databases. It also supports all types of SQL relations. I have plans to further upgrade the tool and add features such as missing PostgreSQL fields, default values, and more.

Wrapping Up#

In this post, I presented entimport, a tool that was anticipated and requested many times by the Ent community. I showed an example of how to use it with Ent. This tool is another addition to the Ent schema import tools, which are designed to make the integration of Ent even easier. For discussion and support, open an issue. The full example can be found here. I hope you found this blog post useful!


Generating OpenAPI Specification with Ent

In a previous blog post, we presented elk - an extension to Ent enabling you to generate a fully-working Go CRUD HTTP API from your schema. In today's post I'd like to introduce a shiny new feature that recently made it into elk: a fully compliant OpenAPI Specification (OAS) generator.

OAS (formerly known as Swagger Specification) is a technical specification defining a standard, language-agnostic interface description for REST APIs. This allows both humans and automated tools to understand the described service without the actual source code or additional documentation. Combined with the Swagger Tooling you can generate both server and client boilerplate code for more than 20 languages, just by passing in the OAS file.

Getting Started#

The first step is to add the elk package to your project:

go install github.com/masseelch/elk

elk uses the Ent Extension API to integrate with Ent's code-generation. This requires that we use the entc (ent codegen) package as described here to generate code for our project. Follow the next two steps to enable it and to configure Ent to work with the elk extension:

1. Create a new Go file named ent/entc.go and paste the following content:

// +build ignore

package main

import (
	"log"

	"entgo.io/ent/entc"
	"entgo.io/ent/entc/gen"
	"github.com/masseelch/elk"
)

func main() {
	ex, err := elk.NewExtension(
		elk.GenerateSpec("openapi.json"),
	)
	if err != nil {
		log.Fatalf("creating elk extension: %v", err)
	}
	err = entc.Generate("./schema", &gen.Config{}, entc.Extensions(ex))
	if err != nil {
		log.Fatalf("running ent codegen: %v", err)
	}
}

2. Edit the ent/generate.go file to execute the ent/entc.go file:

package ent

//go:generate go run -mod=mod entc.go

With these steps complete, all is set up for generating an OAS file from your schema! If you are new to Ent and want to learn more about it, how to connect to different types of databases, run migrations or work with entities, then head over to the Setup Tutorial.

Generate an OAS file#

The first step on our way to the OAS file is to create an Ent schema graph:

go run -mod=mod entgo.io/ent/cmd/ent init Fridge Compartment Item

To demonstrate elk's OAS generation capabilities, we will build an example application together. Suppose I have multiple fridges with multiple compartments, and my significant other and I want to know their contents at all times. To supply ourselves with this incredibly useful information we will create a Go server with a RESTful API. To ease the creation of client applications that can communicate with our server, we will create an OpenAPI Specification file describing its API. Once we have that, we can build a frontend to manage fridges and contents in a language of our choice by using the Swagger Codegen! You can find an example that uses docker to generate a client here.

Let's create our schema:

ent/schema/fridge.go

package schema

import (
	"entgo.io/ent"
	"entgo.io/ent/schema/edge"
	"entgo.io/ent/schema/field"
)

// Fridge holds the schema definition for the Fridge entity.
type Fridge struct {
	ent.Schema
}

// Fields of the Fridge.
func (Fridge) Fields() []ent.Field {
	return []ent.Field{
		field.String("title"),
	}
}

// Edges of the Fridge.
func (Fridge) Edges() []ent.Edge {
	return []ent.Edge{
		edge.To("compartments", Compartment.Type),
	}
}

ent/schema/compartment.go

package schema

import (
	"entgo.io/ent"
	"entgo.io/ent/schema/edge"
	"entgo.io/ent/schema/field"
)

// Compartment holds the schema definition for the Compartment entity.
type Compartment struct {
	ent.Schema
}

// Fields of the Compartment.
func (Compartment) Fields() []ent.Field {
	return []ent.Field{
		field.String("name"),
	}
}

// Edges of the Compartment.
func (Compartment) Edges() []ent.Edge {
	return []ent.Edge{
		edge.From("fridge", Fridge.Type).
			Ref("compartments").
			Unique(),
		edge.To("contents", Item.Type),
	}
}

ent/schema/item.go

package schema

import (
	"entgo.io/ent"
	"entgo.io/ent/schema/edge"
	"entgo.io/ent/schema/field"
)

// Item holds the schema definition for the Item entity.
type Item struct {
	ent.Schema
}

// Fields of the Item.
func (Item) Fields() []ent.Field {
	return []ent.Field{
		field.String("name"),
	}
}

// Edges of the Item.
func (Item) Edges() []ent.Edge {
	return []ent.Edge{
		edge.From("compartment", Compartment.Type).
			Ref("contents").
			Unique(),
	}
}

Now, let's generate the Ent code and the OAS file.

go generate ./...

In addition to the files Ent normally generates, another file named openapi.json has been created. Copy its contents and paste them into the Swagger Editor. You should see three groups: Compartment, Item and Fridge.

Swagger Editor Example

Swagger Editor Example

If you open up the POST operation tab in the Fridge group, you will see a description of the expected request data and all the possible responses. Great!

POST operation on Fridge

POST operation on Fridge

Basic Configuration#

The description of our API does not yet reflect what it does; let's change that! elk provides easy-to-use configuration builders to manipulate the generated OAS file. Open up ent/entc.go and pass in the updated title and description of our Fridge API:

ent/entc.go

//go:build ignore
// +build ignore

package main

import (
	"log"

	"entgo.io/ent/entc"
	"entgo.io/ent/entc/gen"
	"github.com/masseelch/elk"
)

func main() {
	ex, err := elk.NewExtension(
		elk.GenerateSpec(
			"openapi.json",
			// It is a Content-Management-System ...
			elk.SpecTitle("Fridge CMS"),
			// You can use CommonMark syntax (https://commonmark.org/).
			elk.SpecDescription("API to manage fridges and their cooled contents. **ICY!**"),
			elk.SpecVersion("0.0.1"),
		),
	)
	if err != nil {
		log.Fatalf("creating elk extension: %v", err)
	}
	err = entc.Generate("./schema", &gen.Config{}, entc.Extensions(ex))
	if err != nil {
		log.Fatalf("running ent codegen: %v", err)
	}
}

Rerunning the code generator will create an updated OAS file you can copy-paste into the Swagger Editor.

Updated API Info

Updated API Info

Operation configuration#

We do not want to expose endpoints to delete a fridge (seriously, who would ever want that?!). Fortunately, elk lets us configure which endpoints to generate and which to ignore. elk's default policy is to expose all routes. You can either change this behaviour to not expose any route but those explicitly asked for, or you can just tell elk to exclude the DELETE operation on the Fridge by using an elk.SchemaAnnotation:

ent/schema/fridge.go

// Annotations of the Fridge.
func (Fridge) Annotations() []schema.Annotation {
	return []schema.Annotation{
		elk.DeletePolicy(elk.Exclude),
	}
}

And voilà! The DELETE operation is gone.

DELETE operation is gone

DELETE operation is gone

For more information about how elk's policies work and what you can do with them, have a look at the godoc.

Extend specification#

What interests me most in this example is the current contents of a fridge. You can customize the generated OAS to any extent you like by using Hooks. However, this would exceed the scope of this post. An example of how to add an endpoint fridges/{id}/contents to the generated OAS file can be found here.

Generating an OAS-implementing server#

I promised you in the beginning that we'd create a server behaving as described in the OAS. elk makes this easy; all you have to do is call elk.GenerateHandlers() when you configure the extension:

ent/entc.go

[...]
func main() {
	ex, err := elk.NewExtension(
		elk.GenerateSpec(
			[...]
		),
+		elk.GenerateHandlers(),
	)
	[...]
}

Next, re-run code generation:

go generate ./...

Observe that a new directory named ent/http was created.

» tree ent/http
ent/http
├── create.go
├── delete.go
├── easyjson.go
├── handler.go
├── list.go
├── read.go
├── relations.go
├── request.go
├── response.go
└── update.go

0 directories, 10 files

You can spin up the generated server with this very simple main.go:

package main

import (
	"context"
	"log"
	"net/http"

	"<your-project>/ent"
	elk "<your-project>/ent/http"

	_ "github.com/mattn/go-sqlite3"
	"go.uber.org/zap"
)

func main() {
	// Create the ent client.
	c, err := ent.Open("sqlite3", "file:ent?mode=memory&cache=shared&_fk=1")
	if err != nil {
		log.Fatalf("failed opening connection to sqlite: %v", err)
	}
	defer c.Close()
	// Run the auto migration tool.
	if err := c.Schema.Create(context.Background()); err != nil {
		log.Fatalf("failed creating schema resources: %v", err)
	}
	// Start listening to incoming requests.
	if err := http.ListenAndServe(":8080", elk.NewHandler(c, zap.NewExample())); err != nil {
		log.Fatal(err)
	}
}

go run -mod=mod main.go

Our Fridge API server is up and running. With the generated OAS file and the Swagger Tooling you can now generate a client stub in any supported language and forget about writing a RESTful client ever again.
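For example, generating a Go client stub from the spec might look like this (a hedged sketch using the swaggerapi/swagger-codegen-cli Docker image; the mount path and target language are assumptions you should adapt to your setup):

docker run --rm -v "$PWD:/local" swaggerapi/swagger-codegen-cli generate \
  -i /local/openapi.json \
  -l go \
  -o /local/client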

Wrapping Up#

In this post we introduced a new feature of elk - automatic OpenAPI Specification generation. This feature connects between Ent's code-generation capabilities and OpenAPI/Swagger's rich tooling ecosystem.

Have questions? Need help with getting started? Feel free to join our Slack channel.

For more Ent news and updates:

Extending Ent with the Extension API

A few months ago, Ariel made a silent but highly-impactful contribution to Ent's core, the Extension API. While Ent has had extension capabilities (such as Code-gen Hooks, External Templates, and Annotations) for a long time, there wasn't a convenient way to bundle together all of these moving parts into a coherent, self-contained component. The Extension API which we discuss in this post does exactly that.

Many open-source ecosystems thrive specifically because they excel at providing developers an easy and structured way to extend a small, core system. Much criticism has been made of the Node.js ecosystem (even by its original creator Ryan Dahl) but it is very hard to argue that the ease of publishing and consuming new npm modules facilitated the explosion in its popularity. I've discussed on my personal blog how protoc's plugin system works and how that made the Protobuf ecosystem thrive. In short, ecosystems are only created under modular designs.

In our post today, we will explore Ent's Extension API by building a toy example.

Getting Started#

The Extension API only works for projects that use Ent's code-generation as a Go package. To set that up, after initializing your project, create a new file named ent/entc.go:

// +build ignore

package main

import (
	"log"

	"entgo.io/ent/entc"
	"entgo.io/ent/entc/gen"
)

func main() {
	err := entc.Generate("./schema", &gen.Config{})
	if err != nil {
		log.Fatal("running ent codegen:", err)
	}
}

Next, modify ent/generate.go to invoke our entc file:

package ent

//go:generate go run entc.go

Creating our Extension#

All extensions must implement the Extension interface:

type Extension interface {
	// Hooks holds an optional list of Hooks to apply
	// on the graph before/after the code-generation.
	Hooks() []gen.Hook

	// Annotations injects global annotations to the gen.Config object that
	// can be accessed globally in all templates. Unlike schema annotations,
	// being serializable to JSON raw value is not mandatory.
	//
	//	{{- with $.Config.Annotations.GQL }}
	//		{{/* Annotation usage goes here. */}}
	//	{{- end }}
	//
	Annotations() []Annotation

	// Templates specifies a list of alternative templates
	// to execute or to override the default.
	Templates() []*gen.Template

	// Options specifies a list of entc.Options to evaluate on
	// the gen.Config before executing the code generation.
	Options() []Option
}

To simplify the development of new extensions, developers can embed entc.DefaultExtension to create extensions without implementing all methods. In entc.go, add:

// ...

// GreetExtension implements entc.Extension.
type GreetExtension struct {
	entc.DefaultExtension
}

Currently, our extension doesn't do anything. Next, let's connect it to our code-generation config. In entc.go, add our new extension to the entc.Generate invocation:

err := entc.Generate("./schema", &gen.Config{}, entc.Extensions(&GreetExtension{}))

Adding Templates#

External templates can be bundled into extensions to enhance Ent's core code-generation functionality. With our toy example, our goal is to add to each entity a generated method named Greet that returns a greeting with the type's name when invoked. We're aiming for something like:

func (u *User) Greet() string {
	return "Greetings, User"
}

To do this, let's add a new external template file and place it in ent/templates/greet.tmpl:

ent/templates/greet.tmpl

{{ define "greet" }}

{{/* Add the base header for the generated file */}}
{{ $pkg := base $.Config.Package }}
{{ template "header" $ }}

{{/* Loop over all nodes and add the Greet method */}}
{{ range $n := $.Nodes }}
	{{ $receiver := $n.Receiver }}
	func ({{ $receiver }} *{{ $n.Name }}) Greet() string {
		return "Greetings, {{ $n.Name }}"
	}
{{ end }}

{{ end }}

Next, let's implement the Templates method:

ent/entc.go

func (*GreetExtension) Templates() []*gen.Template {
	return []*gen.Template{
		gen.MustParse(gen.NewTemplate("greet").ParseFiles("templates/greet.tmpl")),
	}
}

Next, let's kick the tires on our extension. Add a new schema for the User type in a file named ent/schema/user.go:

package schema

import (
	"entgo.io/ent"
	"entgo.io/ent/schema/field"
)

// User holds the schema definition for the User entity.
type User struct {
	ent.Schema
}

// Fields of the User.
func (User) Fields() []ent.Field {
	return []ent.Field{
		field.String("email_address").
			Unique(),
	}
}

Next, run:

go generate ./...

Observe that a new file, ent/greet.go, was created. It contains:

ent/greet.go

// Code generated by entc, DO NOT EDIT.

package ent

func (u *User) Greet() string {
	return "Greetings, User"
}

Great! Our extension was invoked from Ent's code-generation and produced the code we wanted for our schema!

Adding Annotations#

Annotations provide a way to supply users of our extension with an API to modify the behavior of code generation logic. To add annotations to our extension, implement the Annotations method. Suppose that for our GreetExtension we want to provide users with the ability to configure the greeting word in the generated code:

// GreetingWord implements entc.Annotation.
type GreetingWord string

func (GreetingWord) Name() string {
	return "GreetingWord"
}

Next, we add a word field to our GreetExtension struct:

type GreetExtension struct {
	entc.DefaultExtension
	Word GreetingWord
}

Next, implement the Annotations method:

func (s *GreetExtension) Annotations() []entc.Annotation {
	return []entc.Annotation{
		s.Word,
	}
}

Now, from within your templates you can access the GreetingWord annotation. Modify ent/templates/greet.tmpl to use our new annotation:

func ({{ $receiver }} *{{ $n.Name }}) Greet() string {
	return "{{ $.Annotations.GreetingWord }}, {{ $n.Name }}"
}

Next, modify the code-generation configuration to set the GreetingWord annotation:

err := entc.Generate("./schema",
	&gen.Config{},
	entc.Extensions(&GreetExtension{
		Word: GreetingWord("Shalom"),
	}),
)

To see our annotation control the generated code, re-run:

go generate ./...

Finally, observe that the generated ent/greet.go was updated:

func (u *User) Greet() string {
	return "Shalom, User"
}

Hooray! We added an option to use an annotation to control the greeting word in the generated Greet method!

More Possibilities#

In addition to templates and annotations, the Extension API allows developers to bundle gen.Hooks and entc.Options in extensions to further control the behavior of your code-generation. In this post we will not discuss these possibilities, but if you are interested in using them head over to the documentation.

Wrapping Up#

In this post we explored via a toy example how to use the Extension API to create new Ent code-generation extensions. As we've mentioned above, modular design that allows anyone to extend the core functionality of software is critical to the success of any ecosystem. We're seeing this claim start to realize with the Ent community, here's a list of some interesting projects that use the Extension API:

  • elk - an extension to generate REST endpoints from Ent schemas.
  • entgql - generate GraphQL servers from Ent schemas.
  • entviz - generate ER diagrams from Ent schemas.

And what about you? Do you have an idea for a useful Ent extension? I hope this post demonstrated that with the new Extension API, it is not a difficult task.

Have questions? Need help with getting started? Feel free to join our Slack channel.


Ent Joins the Linux Foundation

Dear community,

I'm really happy to share something that has been in the works for quite some time. Yesterday (August 31st), a press release was issued announcing that Ent is joining the Linux Foundation.

Ent was open-sourced while I was working on it with my peers at Facebook in 2019. Since then, our community has grown, and we've seen the adoption of Ent explode across many organizations of different sizes and sectors.

Our goal with moving under the governance of the Linux Foundation is to provide a corporate-neutral environment in which organizations can more easily contribute code, as we've seen with other successful OSS projects such as Kubernetes and GraphQL. In addition, the move under the governance of the Linux Foundation positions Ent where we would like it to be, a core, infrastructure technology that organizations can trust because it is guaranteed to be here for a long time.

In terms of our community, nothing in particular changes: the repository already moved to github.com/ent/ent a few months ago, the license remains Apache 2.0, and we are all 100% committed to the success of the project. We're sure that the Linux Foundation's strong brand and organizational capabilities will help to build even more confidence in Ent and further foster its adoption in the industry.

I wanted to express my deep gratitude to the amazing folks at Facebook and the Linux Foundation who have worked hard on making this change possible and showing trust in our community to keep pushing the state-of-the-art in data access frameworks. This is a big achievement for our community, and so I want to take a moment to thank all of you for your contributions, support, and trust in this project.

On a personal note, I wanted to share that Rotem (a core contributor to Ent) and I have founded a new company, Ariga. We're on a mission to build something that we call an "operational data graph" that is heavily built using Ent; we will be sharing more details on that in the near future. You can expect to see many new exciting features contributed to the framework by our team. In addition, Ariga employees will dedicate time and resources to support and foster this wonderful community.

If you have any questions about this change or have any ideas on how to make it even better, please don't hesitate to reach out to me on our Slack channel.

Ariel ❤️

Visualizing your Data Graph Using entviz

Joining an existing project with a large codebase can be a daunting task.

Understanding the data model of an application is key for developers to start working on an existing project. One commonly used tool to help overcome this challenge, and enable developers to grasp an application's data model is an ER (Entity Relation) diagram.

ER diagrams provide a visual representation of your data model, and detail each field of the entities. Many tools can help create these; one example is JetBrains DataGrip, which can generate an ER diagram by connecting to and inspecting an existing database:

Datagrip ER diagram

DataGrip ER diagram example

Ent, a simple, yet powerful entity framework for Go, was originally developed inside Facebook specifically for dealing with projects with large and complex data models. This is why Ent uses code generation - it gives type-safety and code-completion out-of-the-box which helps explain the data model and improves developer velocity. On top of all of this, wouldn't it be great to automatically generate ER diagrams that maintain a high-level view of the data model in a visually appealing representation? (I mean, who doesn't love visualizations?)

Introducing entviz#

entviz is an ent extension that automatically generates a static HTML page that visualizes your data graph.

Entviz example output

Entviz example output

Most ER diagram generation tools need to connect to your database and introspect it, which makes it harder to maintain an up-to-date diagram of the database schema. Since entviz integrates directly with your Ent schema, it does not need to connect to your database, and it automatically generates a fresh visualization every time you modify your schema.

If you want to know more about how entviz was implemented, check out the implementation section.

See it in action#

First, let's add the entviz extension to our entc.go file:

go get github.com/hedwigz/entviz

If you are not familiar with entc, you're welcome to read the entc documentation to learn more about it.
ent/entc.go

import (
	"log"

	"entgo.io/ent/entc"
	"entgo.io/ent/entc/gen"
	"github.com/hedwigz/entviz"
)

func main() {
	err := entc.Generate("./schema", &gen.Config{}, entc.Extensions(entviz.Extension{}))
	if err != nil {
		log.Fatalf("running ent codegen: %v", err)
	}
}

Let's say we have a simple schema with a user entity and some fields:

ent/schema/user.go

// Fields of the User.
func (User) Fields() []ent.Field {
	return []ent.Field{
		field.String("name"),
		field.String("email"),
		field.Time("created").
			Default(time.Now),
	}
}

Now, entviz will automatically generate a visualization of our graph every time we run:

go generate ./...

You should now see a new file called schema-viz.html in your ent directory:

$ ll ./ent/schema-viz.html
-rw-r--r-- 1 hedwigz hedwigz 7.3K Aug 27 09:00 schema-viz.html

Open the HTML file with your favorite browser to see the visualization.

tutorial image

Next, let's add another entity named Post, and see how our visualization changes:

ent init Post

ent/schema/post.go

// Fields of the Post.
func (Post) Fields() []ent.Field {
	return []ent.Field{
		field.String("content"),
		field.Time("created").
			Default(time.Now),
	}
}

Now we add an (O2M) edge from User to Post:

ent/schema/user.go

// Edges of the User.
func (User) Edges() []ent.Edge {
	return []ent.Edge{
		edge.To("posts", Post.Type),
	}
}

Finally, regenerate the code:

go generate ./...

Refresh your browser to see the updated result!

tutorial image 2

Implementation#

Entviz was implemented by extending ent via its extension API. The Ent extension API lets you aggregate multiple templates, hooks, options and annotations. For instance, entviz uses templates to add another go file, entviz.go, which exposes the ServeEntviz method that can be used as an http handler, like so:

func main() {
	http.ListenAndServe("localhost:3002", ent.ServeEntviz())
}

We define an extension struct which embeds the default extension, and we export our template via the Templates method:

//go:embed entviz.go.tmpl
var tmplfile string

type Extension struct {
	entc.DefaultExtension
}

func (Extension) Templates() []*gen.Template {
	return []*gen.Template{
		gen.MustParse(gen.NewTemplate("entviz").Parse(tmplfile)),
	}
}

The template file is the code that we want to generate:

{{ define "entviz"}}

{{ $pkg := base $.Config.Package }}
{{ template "header" $ }}

import (
	_ "embed"
	"net/http"
	"strings"
	"time"
)

//go:embed schema-viz.html
var html string

func ServeEntviz() http.Handler {
	generateTime := time.Now()
	return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		http.ServeContent(w, req, "schema-viz.html", generateTime, strings.NewReader(html))
	})
}

{{ end }}

That's it! Now we have a new method in the ent package.

Wrapping-Up#

We saw how ER diagrams help developers keep track of their data model. Next, we introduced entviz - an Ent extension that automatically generates an ER diagram for Ent schemas. We saw how entviz utilizes Ent's extension API to extend the code generation and add extra functionality. Finally, you got to see it in action by installing and using entviz in your own project. If you like the code and/or want to contribute - feel free to check out the project on GitHub.

Have questions? Need help with getting started? Feel free to join our Slack channel.


Building Observable Ent Applications with Prometheus

Observability is a quality of a system that refers to how well its internal state can be measured externally. As a computer program evolves into a full-blown production system this quality becomes increasingly important. One of the ways to make a software system more observable is to export metrics, that is, to report in some externally visible way a quantitative description of the running system's state. For instance, to expose an HTTP endpoint where we can see how many errors occurred since the process has started. In this post, we will explore how to build more observable Ent applications using Prometheus.

What is Ent?#

Ent is a simple, yet powerful entity framework for Go that makes it easy to build and maintain applications with large data models.

What is Prometheus?#

Prometheus is an open source monitoring system developed by engineers at SoundCloud in 2012. It includes an embedded time series database and many integrations to third-party systems. The Prometheus client exposes a process's metrics via an HTTP endpoint (usually /metrics). This endpoint is discovered by the Prometheus scraper, which polls the endpoint every interval (typically 30s) and writes the metrics into a time-series database.
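For reference, a minimal scrape configuration for such an endpoint might look like this (a hedged sketch; the job name and target address are assumptions, not taken from this post):

scrape_configs:
  # Poll the application's /metrics endpoint every 30 seconds.
  - job_name: "ent-app"
    scrape_interval: 30s
    static_configs:
      - targets: ["localhost:8080"]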

Prometheus is just an example of a class of metric collection backends. Many others, such as AWS CloudWatch, InfluxDB and others exist and are in wide use in the industry. Towards the end of this post, we will discuss a possible path to a unified, standards-based integration with any such backend.

Working with Prometheus#

To expose an application's metrics using Prometheus, we need to create a Prometheus Collector; a collector collects a set of metrics from your server.

In our example, we will be using two types of metrics that can be stored in a collector: Counters and Histograms. Counters are monotonically increasing cumulative metrics that represent how many times something has happened, commonly used to count the number of requests a server has processed or errors that have occurred. Histograms sample observations into buckets of configurable sizes and are commonly used to represent latency distributions (i.e. how many requests returned in under 5ms, 10ms, 100ms, 1s, etc.). In addition, Prometheus allows metrics to be broken down into labels. This is useful, for example, for counting requests while breaking down the counter by endpoint name.

Let's see how to create such a collector using the official Go client. To do so, we will use a package in the client called promauto that simplifies the process of creating collectors. A simple example of a collector that counts (for example, total requests or number of request errors):

package example

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

var (
	// List of dynamic labels.
	labelNames = []string{"endpoint", "error_code"}

	// Create a counter collector.
	exampleCollector = promauto.NewCounterVec(
		prometheus.CounterOpts{
			Name: "endpoint_errors",
			Help: "Number of errors in endpoints",
		},
		labelNames,
	)
)

// When using, you set the values of the dynamic labels and then increment the counter.
func incrementError() {
	exampleCollector.WithLabelValues("/create-user", "400").Inc()
}

Ent Hooks#

Hooks are a feature of Ent that allows adding custom logic before and after operations that change the data entities.

A mutation is an operation that changes something in the database. There are 5 types of mutations:

  1. Create.
  2. UpdateOne.
  3. Update.
  4. DeleteOne.
  5. Delete.

Hooks are functions that get an ent.Mutator and return a mutator back. They function similarly to the popular HTTP middleware pattern.

package example

import (
	"context"

	"entgo.io/ent"
)

func exampleHook() ent.Hook {
	// Use this to init your hook.
	return func(next ent.Mutator) ent.Mutator {
		return ent.MutateFunc(func(ctx context.Context, m ent.Mutation) (ent.Value, error) {
			// Do something before mutation.
			v, err := next.Mutate(ctx, m)
			if err != nil {
				// Do something if error after mutation.
			}
			// Do something after mutation.
			return v, err
		})
	}
}

In Ent, there are two types of mutation hooks - schema hooks and runtime hooks. Schema hooks are mainly used for defining custom mutation logic on a specific entity type, for example, syncing entity creation to another system. Runtime hooks, on the other hand, are used to define more global logic for adding things like logging, metrics, tracing, etc.

For our use case, we should definitely use runtime hooks, because to be valuable we want to export metrics on all operations on all entity types:

package example

import (
	"entprom/ent"
	"entprom/ent/hook"
)

func main() {
	client, _ := ent.Open("sqlite3", "file:ent?mode=memory&cache=shared&_fk=1")

	// Add a hook only on user mutations.
	client.User.Use(exampleHook())

	// Add a hook only on update operations.
	client.Use(hook.On(exampleHook(), ent.OpUpdate|ent.OpUpdateOne))
}

Exporting Prometheus Metrics for an Ent Application#

With all of the introductions complete, let's cut to the chase and show how to use Prometheus and Ent hooks together to create an observable application. Our goal with this example is to export these metrics using a hook:

| Metric Name                    | Description                               |
|--------------------------------|-------------------------------------------|
| ent_operation_total            | Number of ent mutation operations         |
| ent_operation_error            | Number of failed ent mutation operations  |
| ent_operation_duration_seconds | Time in seconds per operation             |

Each of these metrics will be broken down by labels into two dimensions:

  • mutation_type: Entity type that is being mutated (User, BlogPost, Account etc.).
  • mutation_op: The operation that is being performed (Create, Delete etc.).

Let's start by defining our collectors:

// Ent dynamic dimensions.
const (
	mutationType = "mutation_type"
	mutationOp   = "mutation_op"
)

var entLabels = []string{mutationType, mutationOp}

// Create a collector for the total operations counter.
func initOpsProcessedTotal() *prometheus.CounterVec {
	return promauto.NewCounterVec(
		prometheus.CounterOpts{
			Name: "ent_operation_total",
			Help: "Number of ent mutation operations",
		},
		entLabels,
	)
}

// Create a collector for the error counter.
func initOpsProcessedError() *prometheus.CounterVec {
	return promauto.NewCounterVec(
		prometheus.CounterOpts{
			Name: "ent_operation_error",
			Help: "Number of failed ent mutation operations",
		},
		entLabels,
	)
}

// Create a collector for the duration histogram.
func initOpsDuration() *prometheus.HistogramVec {
	return promauto.NewHistogramVec(
		prometheus.HistogramOpts{
			Name: "ent_operation_duration_seconds",
			Help: "Time in seconds per operation",
		},
		entLabels,
	)
}

Next, let's define our new hook:

// Hook initializes the collectors, increments the total counter before each
// mutation, increments the error counter on failures, and records the
// duration after each mutation.
func Hook() ent.Hook {
	opsProcessedTotal := initOpsProcessedTotal()
	opsProcessedError := initOpsProcessedError()
	opsDuration := initOpsDuration()
	return func(next ent.Mutator) ent.Mutator {
		return ent.MutateFunc(func(ctx context.Context, m ent.Mutation) (ent.Value, error) {
			// Before mutation, start measuring time.
			start := time.Now()
			// Extract dynamic labels from mutation.
			labels := prometheus.Labels{mutationType: m.Type(), mutationOp: m.Op().String()}
			// Increment total ops counter.
			opsProcessedTotal.With(labels).Inc()
			// Execute mutation.
			v, err := next.Mutate(ctx, m)
			if err != nil {
				// In case of error, increment error counter.
				opsProcessedError.With(labels).Inc()
			}
			// Stop time measure.
			duration := time.Since(start)
			// Record duration in seconds.
			opsDuration.With(labels).Observe(duration.Seconds())
			return v, err
		})
	}
}

Connecting the Prometheus Collector to our Service#

After defining our hook, let's see next how to connect it to our application and how to use Prometheus to serve an endpoint that exposes the metrics in our collectors:

package main

import (
	"context"
	"log"
	"net/http"

	"entprom"
	"entprom/ent"

	_ "github.com/mattn/go-sqlite3"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func createClient() *ent.Client {
	c, err := ent.Open("sqlite3", "file:ent?mode=memory&cache=shared&_fk=1")
	if err != nil {
		log.Fatalf("failed opening connection to sqlite: %v", err)
	}
	ctx := context.Background()
	// Run the auto migration tool.
	if err := c.Schema.Create(ctx); err != nil {
		log.Fatalf("failed creating schema resources: %v", err)
	}
	return c
}

func handler(client *ent.Client) func(w http.ResponseWriter, r *http.Request) {
	return func(w http.ResponseWriter, r *http.Request) {
		ctx := context.Background()
		// Run operations.
		_, err := client.User.Create().SetName("a8m").Save(ctx)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
	}
}

func main() {
	// Create the Ent client and run the migration.
	client := createClient()
	// Register our hook on the client.
	client.Use(entprom.Hook())
	// Simple handler to run actions on our DB.
	http.HandleFunc("/", handler(client))
	// Expose the collected metrics for Prometheus to scrape.
	http.Handle("/metrics", promhttp.Handler())
	log.Println("server starting on port 8080")
	// Run the server.
	log.Fatal(http.ListenAndServe(":8080", nil))
}

After accessing / on our server a few times (using curl or a browser), go to /metrics. There you will see the output from the Prometheus client:

# HELP ent_operation_duration_seconds Time in seconds per operation
# TYPE ent_operation_duration_seconds histogram
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="0.005"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="0.01"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="0.025"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="0.05"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="0.1"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="0.25"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="0.5"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="1"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="2.5"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="5"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="10"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="+Inf"} 2
ent_operation_duration_seconds_sum{mutation_op="OpCreate",mutation_type="User"} 0.000265669
ent_operation_duration_seconds_count{mutation_op="OpCreate",mutation_type="User"} 2
# HELP ent_operation_error Number of failed ent mutation operations
# TYPE ent_operation_error counter
ent_operation_error{mutation_op="OpCreate",mutation_type="User"} 1
# HELP ent_operation_total Number of ent mutation operations
# TYPE ent_operation_total counter
ent_operation_total{mutation_op="OpCreate",mutation_type="User"} 2

At the top, we can see the calculated histogram: it counts the number of operations that fall into each "bucket". Below it are the total number of operations and the number of errors. Each metric is accompanied by its description, which is also shown when querying from the Prometheus dashboard.
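
The bucket boundaries above are the Prometheus client's defaults. Since our operations complete in well under a millisecond, one optional tweak (not required by the example) is to pass custom buckets with finer resolution when creating the collector; the boundary values below are illustrative:

// A variant of initOpsDuration with custom, sub-millisecond buckets.
func initOpsDuration() *prometheus.HistogramVec {
	return promauto.NewHistogramVec(
		prometheus.HistogramOpts{
			Name: "ent_operation_duration_seconds",
			Help: "Time in seconds per operation",
			// Override the default buckets with finer resolution.
			Buckets: []float64{0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1},
		},
		entLabels,
	)
}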

The Prometheus client is only one component of the Prometheus architecture. To run a complete system, including a scraper that polls your endpoint, a Prometheus server that stores your metrics and answers queries, and a simple UI to interact with it, I recommend reading the official documentation or using the docker-compose.yaml in this example repo.

Future Work on Observability in Ent#

As we've mentioned above, there is an abundance of metric collection backends available today, Prometheus being just one of many successful projects. While these solutions differ in many dimensions (self-hosted vs. SaaS, different storage engines with different query languages, and more), from the metric-reporting client's perspective, they are virtually identical.

In cases like these, good software engineering principles suggest that the concrete backend should be abstracted away from the client behind an interface. This interface can then be implemented by the different backends, so client applications can easily switch between implementations. Shifts like this have been happening across our industry in recent years. Consider, for example, the Open Container Initiative or the Service Mesh Interface: both are initiatives that strive to define a standard interface for a problem space, and such an interface is meant to create an ecosystem of implementations of the standard. In the observability space, the exact same convergence is occurring, with OpenCensus and OpenTracing currently merging into OpenTelemetry.

As nice as it would be to publish an Ent + Prometheus extension similar to the one presented in this post, we are firm believers that observability should be solved with a standards-based approach. We invite everyone to join the discussion on what is the right way to do this for Ent.

Wrap-Up#

We started this post by presenting Prometheus, a popular open-source monitoring solution. Next, we reviewed "Hooks", a feature of Ent that allows adding custom logic before and after operations that change the data entities. We then showed how to integrate the two to create observable applications using Ent. Finally, we discussed the future of observability in Ent and invited everyone to join the discussion to shape it.

Have questions? Need help with getting started? Feel free to join our Slack channel.


Announcing the Upsert API in v0.9.0

It has been almost 4 months since our last release, and for good reason. Version 0.9.0, which was released today, is packed with some highly-anticipated features. Perhaps at the top of the list is a feature that has been in discussion for more than a year and a half and was one of the most commonly requested features in the Ent User Survey: the Upsert API!

Version 0.9.0 adds support for "Upsert" style statements using a new feature flag: sql/upsert. Ent has a collection of feature flags that can be switched on to add more capabilities to the code generated by Ent. Feature flags serve both as a mechanism for opting in to features that are not necessarily desired in every project and as a way to experiment with features that may one day become part of Ent's core.

In this post, we will introduce the new feature, the places where it is useful, and demonstrate how to use it.

Upsert#

"Upsert" is a commonly-used term in data systems that is a portmanteau of "update" and "insert" which usually refers to a statement that attempts to insert a record to a table, and if a uniqueness constraint is violated (e.g. a record by that ID already exists) that record is updated instead. While none of the popular relational databases have a specific UPSERT statement, most of them support ways of achieving this type of behavior.

For example, assume we have a table with this definition in an SQLite database:

CREATE TABLE users (
    id integer PRIMARY KEY AUTOINCREMENT,
    email varchar(255) UNIQUE,
    name varchar(255)
)

If we try to execute the same insert twice:

INSERT INTO users (email, name) VALUES ('rotem@entgo.io', 'Rotem Tamir');
INSERT INTO users (email, name) VALUES ('rotem@entgo.io', 'Rotem Tamir');

We get this error:

[2021-08-05 06:49:22] UNIQUE constraint failed: users.email

In many cases, it is useful to have write operations be idempotent, meaning we can run them many times in a row while leaving the system in the same state.

In other cases, it is not desirable to query if a record exists before trying to create it. For these kinds of situations, SQLite supports the ON CONFLICT clause in INSERT statements. To instruct SQLite to override an existing value with the new one we can execute:

INSERT INTO users (email, name) values ('rotem@entgo.io', 'Tamir, Rotem')
ON CONFLICT (email) DO UPDATE SET email=excluded.email, name=excluded.name;

If we prefer to keep the existing values, we can use the DO NOTHING conflict action:

INSERT INTO users (email, name) values ('rotem@entgo.io', 'Tamir, Rotem')
ON CONFLICT DO NOTHING;

Sometimes we want to merge the two versions in some way; we can use the DO UPDATE action a little differently to achieve something like:

INSERT INTO users (email, name) values ('rotem@entgo.io', 'Tamir, Rotem')
ON CONFLICT (email) DO UPDATE SET name=excluded.name || ' (formerly: ' || users.name || ')';

In this case, after our second INSERT the value for the name column would be: Tamir, Rotem (formerly: Rotem Tamir). Not very useful, but hopefully you can see that you can do cool things this way.

Upsert with Ent#

Assume we have an existing Ent project with an entity similar to the users table described above:

// User holds the schema definition for the User entity.
type User struct {
	ent.Schema
}

// Fields of the User.
func (User) Fields() []ent.Field {
	return []ent.Field{
		field.String("email").
			Unique(),
		field.String("name"),
	}
}

As the Upsert API is a newly released feature, make sure to update your ent version using:

go get -u entgo.io/ent@v0.9.0

Next, add the sql/upsert feature flag to your code-generation flags, in ent/generate.go:

package ent

//go:generate go run -mod=mod entgo.io/ent/cmd/ent generate --feature sql/upsert ./schema

Next, re-run code generation for your project:

go generate ./...

Observe that a new method named OnConflict was added to the ent/user_create.go file:

// OnConflict allows configuring the `ON CONFLICT` / `ON DUPLICATE KEY` clause
// of the `INSERT` statement. For example:
//
//	client.User.Create().
//		SetEmailAddress(v).
//		OnConflict(
//			// Update the row with the new values
//			// that were proposed for insertion.
//			sql.ResolveWithNewValues(),
//		).
//		// Override some of the fields with custom
//		// update values.
//		Update(func(u *ent.UserUpsert) {
//			SetEmailAddress(v+v)
//		}).
//		Exec(ctx)
func (uc *UserCreate) OnConflict(opts ...sql.ConflictOption) *UserUpsertOne {
	uc.conflict = opts
	return &UserUpsertOne{
		create: uc,
	}
}

This, along with more newly-generated code, will serve us in achieving upsert behavior for our User entity. To explore it, let's start by writing a test that reproduces the uniqueness constraint error:

func TestUniqueConstraintFails(t *testing.T) {
	client := enttest.Open(t, "sqlite3", "file:ent?mode=memory&cache=shared&_fk=1")
	ctx := context.TODO()
	// Create the user for the first time.
	client.User.
		Create().
		SetEmail("rotem@entgo.io").
		SetName("Rotem Tamir").
		SaveX(ctx)
	// Try to create a user with the same email a second time.
	_, err := client.User.
		Create().
		SetEmail("rotem@entgo.io").
		SetName("Rotem Tamir").
		Save(ctx)
	if !ent.IsConstraintError(err) {
		log.Fatalf("expected the second create to fail with a constraint error")
	}
	log.Printf("second query failed with: %v", err)
}

The test passes:

=== RUN TestUniqueConstraintFails
2021/08/05 07:12:11 second query failed with: ent: constraint failed: insert node to table "users": UNIQUE constraint failed: users.email
--- PASS: TestUniqueConstraintFails (0.00s)

Next, let's see how to instruct Ent to override the existing values with the new ones in case a conflict occurs:

func TestUpsertReplace(t *testing.T) {
	client := enttest.Open(t, "sqlite3", "file:ent?mode=memory&cache=shared&_fk=1")
	ctx := context.TODO()
	// Create the user for the first time.
	orig := client.User.
		Create().
		SetEmail("rotem@entgo.io").
		SetName("Rotem Tamir").
		SaveX(ctx)
	// Try to create a user with the same email a second time.
	// This time we set the ON CONFLICT behavior and use the
	// `UpdateNewValues` modifier.
	newID := client.User.Create().
		SetEmail("rotem@entgo.io").
		SetName("Tamir, Rotem").
		OnConflict().
		UpdateNewValues().
		// We use the IDX method to receive the ID
		// of the created/updated entity.
		IDX(ctx)
	// We expect the ID of the originally created user to be the same as
	// the one that was just updated.
	if orig.ID != newID {
		log.Fatalf("expected upsert to update an existing record")
	}
	current := client.User.GetX(ctx, orig.ID)
	if current.Name != "Tamir, Rotem" {
		log.Fatalf("expected upsert to replace with the new values")
	}
}

Running our test:

=== RUN TestUpsertReplace
--- PASS: TestUpsertReplace (0.00s)

Alternatively, we can use the Ignore modifier to instruct Ent to keep the old version when resolving the conflict. Let's write a test that shows this:

func TestUpsertIgnore(t *testing.T) {
	client := enttest.Open(t, "sqlite3", "file:ent?mode=memory&cache=shared&_fk=1")
	ctx := context.TODO()
	// Create the user for the first time.
	orig := client.User.
		Create().
		SetEmail("rotem@entgo.io").
		SetName("Rotem Tamir").
		SaveX(ctx)
	// Try to create a user with the same email a second time.
	// This time we set the ON CONFLICT behavior and use the
	// `Ignore` modifier.
	client.User.
		Create().
		SetEmail("rotem@entgo.io").
		SetName("Tamir, Rotem").
		OnConflict().
		Ignore().
		ExecX(ctx)
	current := client.User.GetX(ctx, orig.ID)
	if current.Name != orig.Name {
		log.Fatalf("expected upsert to keep the original version")
	}
}
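
Beyond UpdateNewValues and Ignore, the generated API also exposes an Update modifier for resolving conflicts with explicit values, as hinted by the OnConflict doc comment above. A minimal sketch, assuming the same User entity as before (the replacement name is illustrative):

// Set an explicit value for "name" if a user with this email already exists.
id, err := client.User.Create().
	SetEmail("rotem@entgo.io").
	SetName("Tamir, Rotem").
	OnConflict().
	Update(func(u *ent.UserUpsert) {
		u.SetName("Rotem Tamir (merged)")
	}).
	ID(ctx)
if err != nil {
	log.Fatalf("upsert failed: %v", err)
}
log.Printf("upserted user with id: %d", id)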

You can read more about the feature in the Feature Flag or Upsert API documentation.

Wrapping Up#

In this post, we presented the Upsert API, a long-anticipated capability that is available behind a feature flag in Ent v0.9.0. We discussed where upserts are commonly used in applications and how they are implemented in common relational databases. Finally, we showed a simple example of how to get started with the Upsert API using Ent.

Have questions? Need help with getting started? Feel free to join our Slack channel.


Generate a fully-working Go CRUD HTTP API with Ent

When we say that one of the core principles of Ent is "Schema as Code", we mean more than "Ent's DSL for defining entities and their edges is regular Go code". Ent's unique approach, compared to many other ORMs, is to express all of the logic related to an entity as code, directly in the schema definition.

With Ent, developers can write all authorization logic (called "Privacy" within Ent), and all of the mutation side-effects (called "Hooks" within Ent) directly on the schema. Having everything in the same place can be very convenient, but its true power is revealed when paired with code generation.

If schemas are defined this way, it becomes possible to generate code for fully-working production-grade servers automatically. If we move the responsibility for authorization decisions and custom side effects from the RPC layer to the data layer, the implementation of the basic CRUD (Create, Read, Update and Delete) endpoints becomes generic to the extent that it can be machine-generated. This is exactly the idea behind the popular GraphQL and gRPC Ent extensions.

Today, we would like to present a new Ent extension named elk that can automatically generate fully-working, RESTful API endpoints from your Ent schemas. elk strives to automate all of the tedious work of setting up the basic CRUD endpoints for every entity you add to your graph, including logging, validation of the request body, eager-loading relations, and serialization, all while leaving reflection out of sight and maintaining type-safety.

Let's get started!

Getting Started#

The final version of the code below can be found on GitHub.

Start by creating a new Go project:

mkdir elk-example
cd elk-example
go mod init elk-example

Invoke the ent code generator and create two schemas: User, Pet:

go run -mod=mod entgo.io/ent/cmd/ent init Pet User

Your project should now look like this:

.
├── ent
│   ├── generate.go
│   └── schema
│       ├── pet.go
│       └── user.go
├── go.mod
└── go.sum

Next, add the elk package to our project:

go get -u github.com/masseelch/elk

elk uses the Ent extension API to integrate with Ent's code-generation. This requires that we use the entc (ent codegen) package as described here. Follow the next three steps to enable it and to configure Ent to work with the elk extension:

1. Create a new Go file named ent/entc.go and paste the following content:

// +build ignore

package main

import (
	"log"

	"entgo.io/ent/entc"
	"entgo.io/ent/entc/gen"
	"github.com/masseelch/elk"
)

func main() {
	ex, err := elk.NewExtension(
		elk.GenerateSpec("openapi.json"),
		elk.GenerateHandlers(),
	)
	if err != nil {
		log.Fatalf("creating elk extension: %v", err)
	}
	err = entc.Generate("./schema", &gen.Config{}, entc.Extensions(ex))
	if err != nil {
		log.Fatalf("running ent codegen: %v", err)
	}
}

2. Edit the ent/generate.go file to execute the ent/entc.go file:

package ent

//go:generate go run -mod=mod entc.go

3. elk uses some external packages in its generated code. Currently, you have to fetch those packages manually once when setting up elk:

go get github.com/mailru/easyjson github.com/masseelch/render github.com/go-chi/chi/v5 go.uber.org/zap

With these steps complete, all is set up for using our elk-powered ent! To learn more about Ent, how to connect to different types of databases, run migrations or work with entities head over to the Setup Tutorial.

Generating HTTP CRUD Handlers with elk#

To generate the fully-working HTTP handlers, we first need to create an Ent schema definition. Open and edit ent/schema/pet.go:

package schema

import (
	"entgo.io/ent"
	"entgo.io/ent/schema/field"
)

// Pet holds the schema definition for the Pet entity.
type Pet struct {
	ent.Schema
}

// Fields of the Pet.
func (Pet) Fields() []ent.Field {
	return []ent.Field{
		field.String("name"),
		field.Int("age"),
	}
}

We added two fields to our Pet entity: name and age. The ent.Schema just defines the fields of our entity. To generate runnable code from our schema, run:

go generate ./...

Observe that in addition to the files Ent would normally generate, another directory named ent/http was created. These files were generated by the elk extension and contain the code for the generated HTTP handlers. For example, here is some of the generated code for a read-operation on the Pet entity:

const (
	PetCreate Routes = 1 << iota
	PetRead
	PetUpdate
	PetDelete
	PetList
	PetRoutes = 1<<iota - 1
)

// PetHandler handles http crud operations on ent.Pet.
type PetHandler struct {
	handler

	client *ent.Client
	log    *zap.Logger
}

func NewPetHandler(c *ent.Client, l *zap.Logger) *PetHandler {
	return &PetHandler{
		client: c,
		log:    l.With(zap.String("handler", "PetHandler")),
	}
}

// Read fetches the ent.Pet identified by a given url-parameter from the
// database and renders it to the client.
func (h *PetHandler) Read(w http.ResponseWriter, r *http.Request) {
	l := h.log.With(zap.String("method", "Read"))
	// ID is URL parameter.
	id, err := strconv.Atoi(chi.URLParam(r, "id"))
	if err != nil {
		l.Error("error getting id from url parameter", zap.String("id", chi.URLParam(r, "id")), zap.Error(err))
		render.BadRequest(w, r, "id must be an integer greater zero")
		return
	}
	// Create the query to fetch the Pet
	q := h.client.Pet.Query().Where(pet.ID(id))
	e, err := q.Only(r.Context())
	if err != nil {
		switch {
		case ent.IsNotFound(err):
			msg := stripEntError(err)
			l.Info(msg, zap.Error(err), zap.Int("id", id))
			render.NotFound(w, r, msg)
		case ent.IsNotSingular(err):
			msg := stripEntError(err)
			l.Error(msg, zap.Error(err), zap.Int("id", id))
			render.BadRequest(w, r, msg)
		default:
			l.Error("could not read pet", zap.Error(err), zap.Int("id", id))
			render.InternalServerError(w, r, nil)
		}
		return
	}
	l.Info("pet rendered", zap.Int("id", id))
	easyjson.MarshalToHTTPResponseWriter(NewPet2657988899View(e), w)
}

Next, let's see how to create an actual RESTful HTTP server that can manage your Pet entities. Create a file named main.go and add the following content:

package main

import (
	"context"
	"fmt"
	"log"
	"net/http"

	"elk-example/ent"
	elk "elk-example/ent/http"

	"github.com/go-chi/chi/v5"
	_ "github.com/mattn/go-sqlite3"
	"go.uber.org/zap"
)

func main() {
	// Create the ent client.
	c, err := ent.Open("sqlite3", "./ent.db?_fk=1")
	if err != nil {
		log.Fatalf("failed opening connection to sqlite: %v", err)
	}
	defer c.Close()
	// Run the auto migration tool.
	if err := c.Schema.Create(context.Background()); err != nil {
		log.Fatalf("failed creating schema resources: %v", err)
	}
	// Router and Logger.
	r, l := chi.NewRouter(), zap.NewExample()
	// Create the pet handler.
	r.Route("/pets", func(r chi.Router) {
		elk.NewPetHandler(c, l).Mount(r, elk.PetRoutes)
	})
	// Start listening to incoming requests.
	fmt.Println("Server running")
	defer fmt.Println("Server stopped")
	if err := http.ListenAndServe(":8080", r); err != nil {
		log.Fatal(err)
	}
}

Next, start the server:

go run -mod=mod main.go

Congratulations! We now have a running server serving the Pets API. We could ask the server for a list of all pets in the database, but there are none yet. Let's create one first:

curl -X 'POST' -H 'Content-Type: application/json' -d '{"name":"Kuro","age":3}' 'localhost:8080/pets'

You should get this response:

{
  "age": 3,
  "id": 1,
  "name": "Kuro"
}

If you head over to the terminal where the server is running, you can also see elk's built-in logging:

{
  "level": "info",
  "msg": "pet rendered",
  "handler": "PetHandler",
  "method": "Create",
  "id": 1
}

elk uses zap for logging. To learn more about it, have a look at its documentation.

Relations#

To illustrate more of elk's features, let's extend our graph. Edit ent/schema/user.go and ent/schema/pet.go:

ent/schema/pet.go

// Edges of the Pet.
func (Pet) Edges() []ent.Edge {
	return []ent.Edge{
		edge.From("owner", User.Type).
			Ref("pets").
			Unique(),
	}
}

ent/schema/user.go

package schema

import (
	"entgo.io/ent"
	"entgo.io/ent/schema/edge"
	"entgo.io/ent/schema/field"
)

// User holds the schema definition for the User entity.
type User struct {
	ent.Schema
}

// Fields of the User.
func (User) Fields() []ent.Field {
	return []ent.Field{
		field.String("name"),
		field.Int("age"),
	}
}

// Edges of the User.
func (User) Edges() []ent.Edge {
	return []ent.Edge{
		edge.To("pets", Pet.Type),
	}
}

We have now created a One-To-Many relation between the Pet and User schemas: A pet belongs to a user, and a user can have multiple pets.

Rerun the code generator:

go generate ./...
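
With the relation in place, the regenerated client also exposes typed edge helpers such as AddPets. As a small sketch, assuming an open client c and a context ctx as in main.go above:

// Create a pet and a user that owns it, wiring the edge with AddPets.
kuro, err := c.Pet.Create().SetName("Kuro").SetAge(3).Save(ctx)
if err != nil {
	log.Fatalf("creating pet: %v", err)
}
owner, err := c.User.Create().SetName("Elk").SetAge(30).AddPets(kuro).Save(ctx)
if err != nil {
	log.Fatalf("creating user: %v", err)
}
log.Printf("user %q owns pet %q", owner.Name, kuro.Name)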

Do not forget to register the UserHandler on our router. Just add the following lines to main.go:

[...]
  r.Route("/pets", func(r chi.Router) {
      elk.NewPetHandler(c, l).Mount(r, elk.PetRoutes)
  })
+ // Create the user handler.
+ r.Route("/users", func(r chi.Router) {
+     elk.NewUserHandler(c, l).Mount(r, elk.UserRoutes)
+ })
  // Start listening to incoming requests.
  fmt.Println("Server running")
[...]

After restarting the server we can create a User that owns the previously created Pet named Kuro:

curl -X 'POST' -H 'Content-Type: application/json' -d '{"name":"Elk","age":30,"owner":1}' 'localhost:8080/users'

The server returns the following response:

{
  "age": 30,
  "edges": {},
  "id": 1,
  "name": "Elk"
}

From the output, we can see that the user has been created, but that the edges are empty. elk does not include edges in its output by default, but you can configure it to render edges using a feature called "serialization groups": annotate your schemas with the elk.SchemaAnnotation and elk.Annotation structs. Edit ent/schema/user.go and add those:

// Edges of the User.
func (User) Edges() []ent.Edge {
	return []ent.Edge{
		edge.To("pets", Pet.Type).
			Annotations(elk.Groups("user")),
	}
}

// Annotations of the User.
func (User) Annotations() []schema.Annotation {
	return []schema.Annotation{elk.ReadGroups("user")}
}

The elk.Annotations added to the fields and edges tell elk to eager-load them and add them to the payload if the "user" group is requested. The elk.SchemaAnnotation makes the read-operation of the UserHandler request the "user" group. Note that any fields that do not have a serialization group attached are included by default. Edges, however, are excluded unless configured otherwise.
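
Serialization groups can restrict fields as well. As a hypothetical sketch (the field and group names below are illustrative and not part of this example project), a field annotated with a group is only rendered when a handler requests that group:

// Hypothetical: this field is only included in the payload when the
// "admin" group is requested by the handler.
field.String("internal_note").
	Annotations(elk.Groups("admin")),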

Next, let's regenerate the code once again and restart the server. You should now see the pets of a user rendered if you read a resource:

curl 'localhost:8080/users/1'
{
  "age": 30,
  "edges": {
    "pets": [
      {
        "id": 1,
        "name": "Kuro",
        "age": 3,
        "edges": {}
      }
    ]
  },
  "id": 1,
  "name": "Elk"
}

Request Validation#

Our current schemas allow setting a negative age for pets or users, and we can create pets without an owner (as we did with Kuro). Ent has built-in support for basic validation, but in some cases you may want to validate requests made against your API before passing their payload on to Ent. elk uses the go-playground/validator package to define validation rules and validate data. We can create separate validation rules for Create and Update operations using elk.Annotation. In our example, let's assume that we want our Pet schema to only allow ages greater than zero and to disallow creating a pet without an owner. Edit ent/schema/pet.go:

// Fields of the Pet.
func (Pet) Fields() []ent.Field {
	return []ent.Field{
		field.String("name"),
		field.Int("age").
			Positive().
			Annotations(
				elk.CreateValidation("required,gt=0"),
				elk.UpdateValidation("gt=0"),
			),
	}
}

// Edges of the Pet.
func (Pet) Edges() []ent.Edge {
	return []ent.Edge{
		edge.From("owner", User.Type).
			Ref("pets").
			Unique().
			Required().
			Annotations(elk.Validation("required")),
	}
}

Next, regenerate the code and restart the server. To test our new validation rules, let's try to create a pet with an invalid age and without an owner:

curl -X 'POST' -H 'Content-Type: application/json' -d '{"name":"Bob","age":-2}' 'localhost:8080/pets'

elk returns a detailed response that includes information about which validations failed:

{
  "code": 400,
  "status": "Bad Request",
  "errors": {
    "Age": "This value failed validation on 'gt:0'.",
    "Owner": "This value is required."
  }
}

Note the uppercase field names: the validator package uses the struct's field names to generate its validation errors, but you can simply override this, as stated in the example.

If you do not define any validation rules, elk will not include the validation code in its generated output. elk's request validation is especially useful if you want to do cross-field validation.
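
As a hedged sketch of what such a cross-field rule could look like, using the validator package's gtfield tag (the fields below are illustrative and not part of this example project):

// Hypothetical Pet fields: an adoption window where the end of the window
// must come after its start. "AvailableFrom" refers to the generated
// request-struct field name (see the note on uppercase names above).
field.Time("available_from").
	Annotations(elk.CreateValidation("required")),
field.Time("available_to").
	Annotations(elk.CreateValidation("required,gtfield=AvailableFrom")),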

Upcoming Features#

We hope you agree that elk has some useful features already, but there are still many exciting things to come. The next version of elk will include:

  • A fully-working Flutter frontend to administrate your nodes
  • Integration of Ent's validation in the current request validator
  • More transport formats (currently only JSON)

Conclusion#

This post has shown just a small part of what elk can do. To see more examples of what you can do with it, head over to the project's README on GitHub. I hope that with elk-powered Ent, you and your fellow developers can automate some of the repetitive tasks that go into building RESTful APIs and focus on more meaningful work.

elk is in an early stage of development; we welcome any suggestions or feedback, and if you are willing to help, we'd be very glad. The GitHub Issues page is a wonderful place to reach out for help, feedback, suggestions, and contributions.

About the Author#

MasseElch is a software engineer from the windy, flat north of Germany. When not hiking with his dog Kuro (who has his own Instagram channel 😱) or playing hide-and-seek with his son, he drinks coffee and enjoys coding.