Introducing sqlcomment - Database Performance Analysis with Ent and Google's Sqlcommenter

Ent is a powerful Entity framework that helps developers write neat code that is translated into (possibly complex) database queries. As the usage of your application grows, it doesn’t take long until you stumble upon performance issues with your database. Troubleshooting database performance issues is notoriously hard, especially when you’re not equipped with the right tools.

The following example shows how Ent query code is translated into an SQL query.

ent example 1

Example 1 - ent code is translated to SQL query

Traditionally, it has been very difficult to correlate between poorly performing database queries and the application code that is generating them. Database performance analysis tools could help point out slow queries by analyzing database server logs, but how could they be traced back to the application?

Sqlcommenter#

Earlier this year, Google introduced Sqlcommenter. Sqlcommenter is

an open source library that addresses the gap between the ORM libraries and understanding database performance. Sqlcommenter gives application developers visibility into which application code is generating slow queries and maps application traces to database query plans.

In other words, Sqlcommenter adds application context metadata to SQL queries. This information can then be used to provide meaningful insights. It does so by adding SQL comments (https://en.wikipedia.org/wiki/SQL_syntax#Comments) to the query that carry metadata but are ignored by the database during query execution. For example, the following query contains a comment that carries metadata about the application that issued it (`users-mgr`), which controller and route triggered it (`users` and `user_rename`, respectively), and the database driver that was used (`ent:v0.9.1`):

update users set username = 'hedwigz' where id = 88
/*application='users-mgr',controller='users',route='user_rename',db_driver='ent:v0.9.1'*/

To get a taste of how the analysis of metadata collected with Sqlcommenter can help us better understand performance issues in our application, consider the following example: Google Cloud recently launched Cloud SQL Insights, a cloud-based SQL performance analysis product. In the image below, we see a screenshot from the Cloud SQL Insights Dashboard showing that the HTTP route 'api/users' is causing many locks on the database. We can also see that this query got called 16,067 times in the last 6 hours.

Cloud SQL insights

Screenshot from Cloud SQL Insights Dashboard

This is the power of SQL tags - they correlate your application-level information with your database monitors.

sqlcomment#

sqlcomment is an Ent driver that adds metadata to SQL queries using comments following the sqlcommenter specification. By wrapping an existing Ent driver with sqlcomment, users can leverage any tool that supports the standard to triage query performance issues. Without further ado, let’s see sqlcomment in action.

First, to install sqlcomment run:

go get ariga.io/sqlcomment

sqlcomment wraps an underlying SQL driver; therefore, we need to open our SQL connection using Ent's sql module instead of the popular helper ent.Open.

Make sure to import entgo.io/ent/dialect/sql in the following snippet:
// Create db driver.
db, err := sql.Open("sqlite3", "file:ent?mode=memory&cache=shared&_fk=1")
if err != nil {
	log.Fatalf("Failed to connect to database: %v", err)
}

// Create sqlcomment driver which wraps sqlite driver.
drv := sqlcomment.NewDriver(db,
	sqlcomment.WithDriverVerTag(),
	sqlcomment.WithTags(sqlcomment.Tags{
		sqlcomment.KeyApplication: "my-app",
		sqlcomment.KeyFramework:   "net/http",
	}),
)

// Create and configure ent client.
client := ent.NewClient(ent.Driver(drv))

Now, whenever we execute a query, sqlcomment will suffix our SQL query with the tags we set up. If we were to run the following query:

client.User.
	Update().
	Where(
		user.Or(
			user.AgeGT(30),
			user.Name("bar"),
		),
		user.HasFollowers(),
	).
	SetName("foo").
	Save(ctx)

Ent would output the following commented SQL query:

UPDATE `users`
SET `name` = ?
WHERE (
`users`.`age` > ?
OR `users`.`name` = ?
)
AND `users`.`id` IN (
SELECT `user_following`.`follower_id`
FROM `user_following`
)
/*application='my-app',db_driver='ent:v0.9.1',framework='net%2Fhttp'*/

As you can see, Ent outputted an SQL query with a comment at the end, containing all the relevant information associated with that query.

sqlcomment supports more tags, and has integrations with OpenTelemetry and OpenCensus. To see more examples and scenarios, please visit the github repo.
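
For instance, here is a hedged sketch of tagging queries with the controller and route from the earlier example. The KeyController and KeyRoute constants are assumed by analogy with the KeyApplication and KeyFramework constants used above, mirroring the sqlcommenter spec keys; check the package documentation before relying on them.

drv := sqlcomment.NewDriver(db,
	sqlcomment.WithDriverVerTag(),
	sqlcomment.WithTags(sqlcomment.Tags{
		sqlcomment.KeyApplication: "users-mgr",
		// Assumed constants, named after the sqlcommenter spec keys.
		sqlcomment.KeyController: "users",
		sqlcomment.KeyRoute:      "user_rename",
	}),
)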

Wrapping-Up#

In this post I showed how adding metadata to queries using SQL comments can help correlate source code with database queries. Next, I introduced sqlcomment - an Ent driver that adds SQL tags to all of your queries. Finally, we saw sqlcomment in action by installing and configuring it with Ent. If you like the code and/or want to contribute - feel free to check out the project on GitHub.

Have questions? Need help with getting started? Feel free to join our Slack channel.


Announcing entcache - a Cache Driver for Ent

While working on Ariga's operational data graph query engine, we saw the opportunity to greatly improve the performance of many use cases by building a robust caching library. As heavy users of Ent, it was only natural for us to implement this layer as an extension to Ent. In this post, I will briefly explain what caches are, how they fit into software architectures, and present entcache - a cache driver for Ent.

Caching is a popular strategy for improving application performance. It is based on the observation that the speed for retrieving data using different types of media can vary within many orders of magnitude. Jeff Dean famously presented the following numbers in a lecture about "Software Engineering Advice from Building Large-Scale Distributed Systems":

cache numbers

These numbers show things that experienced software engineers know intuitively: reading from memory is faster than reading from disk, and retrieving data from the same data center is faster than going out to the internet to fetch it. To that we can add that some calculations are expensive and slow, and that fetching a precomputed result can be much faster (and less expensive) than recomputing it every time.

The collective intelligence of Wikipedia tells us that a Cache is "a hardware or software component that stores data so that future requests for that data can be served faster". In other words, if we can store a query result in RAM, we can fulfill a request that depends on it much faster than if we need to go over the network to our database, have it read data from disk, run some computation on it, and only then send it back to us (over a network).

However, as software engineers, we should remember that caching is a notoriously complicated topic. As the phrase coined by early-day Netscape engineer Phil Karlton says: "There are only two hard things in Computer Science: cache invalidation and naming things". For instance, in systems that rely on strong consistency, a cache entry may be stale, therefore causing the system to behave incorrectly. For this reason, take great care and pay attention to detail when you are designing caches into your system architectures.

Presenting entcache#

The entcache package provides its users with a new Ent driver that can wrap one of the existing SQL drivers available for Ent. On a high level, it decorates the Query method of the given driver, and for each call:

  1. Generates a cache key (i.e. hash) from its arguments (i.e. statement and parameters).

  2. Checks the cache to see if the results for this query are already available. If they are (this is called a cache-hit), the database is skipped and results are returned to the caller from memory.

  3. If the cache does not contain an entry for the query, the query is passed to the database.

  4. After the query is executed, the driver records the raw values of the returned rows (sql.Rows), and stores them in the cache with the generated cache key.
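
To make these steps concrete, below is a minimal, self-contained sketch of the decorator pattern they describe. A toy queryer interface stands in for Ent's dialect.Driver and the store is a plain map, so treat this as an illustration of the flow, not the actual entcache implementation.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sync"
)

// queryer stands in for the wrapped driver's Query method.
type queryer interface {
	Query(stmt string, args ...any) (string, error)
}

// db simulates the real database.
type db struct{}

func (db) Query(stmt string, args ...any) (string, error) {
	return fmt.Sprintf("rows for %q %v", stmt, args), nil
}

// cached decorates a queryer with the four steps above.
type cached struct {
	next    queryer
	mu      sync.Mutex
	entries map[string]string
}

func (c *cached) Query(stmt string, args ...any) (string, error) {
	// 1. Generate a cache key from the statement and parameters.
	sum := sha256.Sum256([]byte(fmt.Sprintf("%s|%v", stmt, args)))
	key := hex.EncodeToString(sum[:])
	// 2. On a cache-hit, skip the database and return from memory.
	c.mu.Lock()
	rows, ok := c.entries[key]
	c.mu.Unlock()
	if ok {
		return rows, nil
	}
	// 3. On a cache-miss, pass the query to the database.
	rows, err := c.next.Query(stmt, args...)
	if err != nil {
		return "", err
	}
	// 4. Record the returned rows under the generated key.
	c.mu.Lock()
	c.entries[key] = rows
	c.mu.Unlock()
	return rows, nil
}

func main() {
	c := &cached{next: db{}, entries: map[string]string{}}
	fmt.Println(c.Query("SELECT * FROM users WHERE id = ?", 1)) // hits the database
	fmt.Println(c.Query("SELECT * FROM users WHERE id = ?", 1)) // served from memory
}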

The package provides a variety of options to configure the TTL of the cache entries, control the hash function, provide custom and multi-level cache stores, and evict or skip cache entries. See the full documentation at https://pkg.go.dev/ariga.io/entcache.

As we mentioned above, correctly configuring caching for an application is a delicate task, and so entcache provides developers with different caching levels that can be used with it:

  1. A context.Context-based cache. Usually, attached to a request and does not work with other cache levels. It is used to eliminate duplicate queries that are executed by the same request.

  2. A driver-level cache used by the ent.Client. An application usually creates a driver per database, and therefore, we treat it as a process-level cache.

  3. A remote cache. For example, a Redis database that provides a persistence layer for storing and sharing cache entries between multiple processes. A remote cache layer is resistant to application deployment changes or failures, and allows reducing the number of identical queries executed on the database by different processes.

  4. A cache hierarchy, or multi-level cache, allows structuring the cache in a hierarchical way. The hierarchy of cache stores is mostly based on access speeds and cache sizes. For example, a 2-level cache that is composed of an LRU cache in the application memory, and a remote-level cache backed by a Redis database, as sketched below.
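
As a rough sketch of that last level, assuming the entcache.Levels, entcache.NewLRU, and entcache.NewRedis constructors from the package documentation and the go-redis client, a two-level setup might look like this:

rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
drv := entcache.NewDriver(
	db,
	entcache.Levels(
		entcache.NewLRU(256),   // level 1: LRU cache in application memory
		entcache.NewRedis(rdb), // level 2: remote cache shared between processes
	),
)
client := ent.NewClient(ent.Driver(drv))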

Let's demonstrate this by explaining the context.Context-based cache.

Context-Level Cache#

The ContextLevel option configures the driver to work with a context.Context level cache. The context is usually attached to a request (e.g. *http.Request) and is not available in multi-level mode. When this option is used as a cache store, the attached context.Context carries an LRU cache (can be configured differently), and the driver stores and searches entries in the LRU cache when queries are executed.
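
A hedged sketch of wiring this up, assuming the entcache.ContextLevel option and the entcache.NewContext helper described in the package documentation:

drv := entcache.NewDriver(db, entcache.ContextLevel())
client := ent.NewClient(ent.Driver(drv))

// Attach a fresh cache to each request's context; entries live only
// for the lifetime of that request.
http.HandleFunc("/users", func(w http.ResponseWriter, r *http.Request) {
	ctx := entcache.NewContext(r.Context())
	// The second call is answered from the context-level cache.
	u1, _ := client.User.Get(ctx, 1)
	u2, _ := client.User.Get(ctx, 1)
	fmt.Fprintln(w, u1, u2)
})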

This option is ideal for applications that require strong consistency, but still want to avoid executing duplicate database queries on the same request. For example, given the following GraphQL query:

query($ids: [ID!]!) {
  nodes(ids: $ids) {
    ... on User {
      id
      name
      todos {
        id
        owner {
          id
          name
        }
      }
    }
  }
}

A naive solution for resolving the above query would execute 1 query for getting the N users, another N queries for getting the todos of each user, and a query per todo item for getting its owner (read more about the N+1 Problem).

However, Ent provides a unique approach for resolving such queries (read more on the Ent website), and therefore only 3 queries will be executed in this case: 1 for getting the N users, 1 for getting the todo items of all users, and 1 for getting the owners of all todo items.

With entcache, the number of queries may be reduced to 2, as the first and last queries are identical (see code example).

context-level-cache

The different levels are explained in depth in the repository README.

Getting Started#

If you are not familiar with how to set up a new Ent project, complete Ent Setting Up tutorial first.

First, go get the package using the following command.

go get ariga.io/entcache

After installing entcache, you can easily add it to your project with the snippet below:

// Open the database connection.
db, err := sql.Open(dialect.SQLite, "file:ent?mode=memory&cache=shared&_fk=1")
if err != nil {
	log.Fatal("opening database", err)
}
// Decorate the sql.Driver with entcache.Driver.
drv := entcache.NewDriver(db)
// Create an ent.Client.
client := ent.NewClient(ent.Driver(drv))
// Tell the entcache.Driver to skip the caching layer
// when running the schema migration.
if err := client.Schema.Create(entcache.Skip(ctx)); err != nil {
	log.Fatal("running schema migration", err)
}
// Run queries.
if _, err := client.User.Get(ctx, id); err != nil {
	log.Fatal("querying user", err)
}
// The query below is cached.
if _, err := client.User.Get(ctx, id); err != nil {
	log.Fatal("querying user", err)
}

To see more advanced examples, head over to the repo's examples directory.

Wrapping Up#

In this post, I presented entcache, a new cache driver for Ent that I developed while working on Ariga's Operational Data Graph query engine. We started the discussion by briefly mentioning the motivation for including caches in software systems. Following that, we described the features and capabilities of entcache and concluded with a short example of how you can set it up in your application.

There are a few features we are working on, and wish to work on, but need help from the community to design them properly (solving cache invalidation, anyone? ;)). If you are interested in contributing, reach out to me on the Ent Slack channel.


Generating Ent Schemas from Existing SQL Databases

A few months ago, the Ent project announced the Schema Import Initiative, whose goal is to help support many use cases for generating Ent schemas from external resources. Today, I'm happy to share a project I’ve been working on: entimport - an importent (pun intended) command line tool designed to create Ent schemas from existing SQL databases. This is a feature that has been requested by the community for some time, so I hope many people find it useful. It can help ease the transition of an existing setup from another language or ORM to Ent. It can also help with use cases where you would like to access the same data from different platforms (for example, to automatically sync between them).
The first version supports both MySQL and PostgreSQL databases, with some limitations described below. Support for other relational databases such as SQLite is in the works.

Getting Started#

To give you an idea of how entimport works, I want to share a quick example of end-to-end usage with a MySQL database. On a high level, this is what we’re going to do:

  1. Create a Database and Schema - we want to show how entimport can generate an Ent schema for an existing database. We will first create a database, then define some tables in it that we can import into Ent.
  2. Initialize an Ent Project - we will use the Ent CLI to create the needed directory structure and an Ent schema generation script.
  3. Install entimport
  4. Run entimport against our demo database - next, we will import the database schema that we’ve created into our Ent project.
  5. Explain how to use Ent with our generated schemas.

Let's get started.

Create a Database#

We’re going to start by creating a database. The way I prefer to do it is to use a Docker container. We will use a docker-compose file, which will automatically pass all the needed parameters to the MySQL container.

Start the project in a new directory called entimport-example. Create a file named docker-compose.yaml and paste the following content inside:

version: "3.7"
services:
mysql8:
platform: linux/amd64
image: mysql
environment:
MYSQL_DATABASE: entimport
MYSQL_ROOT_PASSWORD: pass
healthcheck:
test: mysqladmin ping -ppass
ports:
- "3306:3306"

This file contains the service configuration for a MySQL docker container. Run it with the following command:

docker-compose up -d

Next, we will create a simple schema. For this example we will use a relation between two entities:

  • User
  • Car

Connect to the database using the MySQL shell; you can do it with the following command:

Make sure you run it from the root project directory.

docker-compose exec mysql8 mysql --database=entimport -ppass

create table users
(
    id        bigint auto_increment primary key,
    age       bigint       not null,
    name      varchar(255) not null,
    last_name varchar(255) null comment 'surname'
);

create table cars
(
    id          bigint auto_increment primary key,
    model       varchar(255) not null,
    color       varchar(255) not null,
    engine_size mediumint    not null,
    user_id     bigint       null,
    constraint cars_owners foreign key (user_id) references users (id) on delete set null
);

Let's validate that we've created the tables mentioned above. In your MySQL shell, run:

show tables;
+---------------------+
| Tables_in_entimport |
+---------------------+
| cars |
| users |
+---------------------+

We should see two tables: users & cars.

Initialize Ent Project#

Now that we've created our database, and a baseline schema to demonstrate our example, we need to create a Go project with Ent. In this phase I will explain how to do it. Since eventually we would like to use our imported schema, we need to create the Ent directory structure.

Initialize a new Go project inside a directory called entimport-example:

go mod init entimport-example

Run ent init:

go run -mod=mod entgo.io/ent/cmd/ent init

The project should look like this:

├── docker-compose.yaml
├── ent
│ ├── generate.go
│ └── schema
└── go.mod

Install entimport#

OK, now the fun begins! We are finally ready to install entimport and see it in action.
Let’s start by running entimport:

go run -mod=mod ariga.io/entimport/cmd/entimport -h

entimport will be downloaded and the command will print:

Usage of entimport:
  -dialect string
        database dialect (default "mysql")
  -dsn string
        data source name (connection information)
  -schema-path string
        output path for ent schema (default "./ent/schema")
  -tables value
        comma-separated list of tables to inspect (all if empty)

Run entimport#

We are now ready to import our MySQL schema to Ent!

We will do it with the following command:

This command will import all tables in our schema; you can also limit the import to specific tables using the -tables flag, as shown after the command below.

go run ariga.io/entimport/cmd/entimport -dialect mysql -dsn "root:pass@tcp(localhost:3306)/entimport"
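
If, for instance, we only wanted the two tables created earlier, the same command with the -tables flag (per the help output above) would be:

go run ariga.io/entimport/cmd/entimport -dialect mysql -dsn "root:pass@tcp(localhost:3306)/entimport" -tables "users,cars"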

Like many unix tools, entimport doesn't print anything on a successful run. To verify that it ran properly, we will check the file system, and more specifically, the ent/schema directory.

├── docker-compose.yaml
├── ent
│ ├── generate.go
│ └── schema
│ ├── car.go
│ └── user.go
├── go.mod
└── go.sum

Let’s see what this gives us - remember that we had two tables, users and cars, with a one-to-many relationship between them. Let’s see how entimport performed.

entimport-example/ent/schema/user.go

type User struct {
	ent.Schema
}

func (User) Fields() []ent.Field {
	return []ent.Field{field.Int("id"), field.Int("age"), field.String("name"), field.String("last_name").Optional().Comment("surname")}
}

func (User) Edges() []ent.Edge {
	return []ent.Edge{edge.To("cars", Car.Type)}
}

func (User) Annotations() []schema.Annotation {
	return nil
}

entimport-example/ent/schema/car.go

type Car struct {
	ent.Schema
}

func (Car) Fields() []ent.Field {
	return []ent.Field{field.Int("id"), field.String("model"), field.String("color"), field.Int32("engine_size"), field.Int("user_id").Optional()}
}

func (Car) Edges() []ent.Edge {
	return []ent.Edge{edge.From("user", User.Type).Ref("cars").Unique().Field("user_id")}
}

func (Car) Annotations() []schema.Annotation {
	return nil
}

entimport successfully created the entities and the relation between them!

So far so good. Now let’s actually try them out. First, we must run Ent's code generation; we do this because Ent is a schema-first ORM that generates Go code for interacting with different databases.

To run the Ent code generation:

go generate ./ent

Let's see our ent directory:

...
├── ent
│ ├── car
│ │ ├── car.go
│ │ └── where.go
...
│ ├── schema
│ │ ├── car.go
│ │ └── user.go
...
│ ├── user
│ │ ├── user.go
│ │ └── where.go
...

Ent Example#

Let’s run a quick example to verify that our schema works:

Create a file named example.go in the root of the project, with the following content:

This part of the example can be found here

entimport-example/example.go
package main

import (
	"context"
	"fmt"
	"log"

	"entimport-example/ent"

	"entgo.io/ent/dialect"
	_ "github.com/go-sql-driver/mysql"
)

func main() {
	client, err := ent.Open(dialect.MySQL, "root:pass@tcp(localhost:3306)/entimport?parseTime=True")
	if err != nil {
		log.Fatalf("failed opening connection to mysql: %v", err)
	}
	defer client.Close()
	ctx := context.Background()
	example(ctx, client)
}

Let's try to add a user. Write the following code at the end of the file:

entimport-example/example.go
func example(ctx context.Context, client *ent.Client) {
	// Create a User.
	zeev := client.User.
		Create().
		SetAge(33).
		SetName("Zeev").
		SetLastName("Manilovich").
		SaveX(ctx)
	fmt.Println("User created:", zeev)
}

Then run:

go run example.go

This should output:

# User created: User(id=1, age=33, name=Zeev, last_name=Manilovich)

Let's check the database to see if the user was really added:

SELECT *
FROM users
WHERE name = 'Zeev';
+--+---+----+----------+
|id|age|name|last_name |
+--+---+----+----------+
|1 |33 |Zeev|Manilovich|
+--+---+----+----------+

Great! Now let's play a little more with Ent and add some relations. Add the following code at the end of the example() func:

Make sure you add "entimport-example/ent/user" to the import() declaration.

entimport-example/example.go
// Create Car.
vw := client.Car.
	Create().
	SetModel("volkswagen").
	SetColor("blue").
	SetEngineSize(1400).
	SaveX(ctx)
fmt.Println("First car created:", vw)

// Update the user - add the car relation.
client.User.Update().Where(user.ID(zeev.ID)).AddCars(vw).SaveX(ctx)

// Query all cars that belong to the user.
cars := zeev.QueryCars().AllX(ctx)
fmt.Println("User cars:", cars)

// Create a second Car.
delorean := client.Car.
	Create().
	SetModel("delorean").
	SetColor("silver").
	SetEngineSize(9999).
	SaveX(ctx)
fmt.Println("Second car created:", delorean)

// Update the user - add another car relation.
client.User.Update().Where(user.ID(zeev.ID)).AddCars(delorean).SaveX(ctx)

// Traverse the sub-graph.
cars = delorean.
	QueryUser().
	QueryCars().
	AllX(ctx)
fmt.Println("User cars:", cars)

This part of the example can be found here

Now run: go run example.go.
After running the code above, the database should hold a user with 2 cars in an O2M relation.

SELECT *
FROM users;

+--+---+----+----------+
|id|age|name|last_name |
+--+---+----+----------+
|1 |33 |Zeev|Manilovich|
+--+---+----+----------+

SELECT *
FROM cars;

+--+----------+------+-----------+-------+
|id|model     |color |engine_size|user_id|
+--+----------+------+-----------+-------+
|1 |volkswagen|blue  |1400       |1      |
|2 |delorean  |silver|9999       |1      |
+--+----------+------+-----------+-------+

Syncing DB changes#

Since we want to keep the Ent schema in sync with the database, we want entimport to be able to update the schema after the database has changed. Let's see how it works.

Run the following SQL code to add a phone column with a unique index to the users table:

alter table users
add phone varchar(255) null;
create unique index users_phone_uindex
on users (phone);

The table should look like this:

describe users;
+-----------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-----------+--------------+------+-----+---------+----------------+
| id | bigint | NO | PRI | NULL | auto_increment |
| age | bigint | NO | | NULL | |
| name | varchar(255) | NO | | NULL | |
| last_name | varchar(255) | YES | | NULL | |
| phone | varchar(255) | YES | UNI | NULL | |
+-----------+--------------+------+-----+---------+----------------+

Now let's run entimport again to get the latest schema from our database:

go run -mod=mod ariga.io/entimport/cmd/entimport -dialect mysql -dsn "root:pass@tcp(localhost:3306)/entimport"

We can see that the user.go file was changed:

entimport-example/ent/schema/user.go
func (User) Fields() []ent.Field {
	return []ent.Field{field.Int("id"), ..., field.String("phone").Optional().Unique()}
}

Now we can run go generate ./ent again and use the new schema to add a phone to the User entity.
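
For example, a short sketch that relies on Ent's generated-setter naming convention (so the SetPhone name is an assumption until you regenerate and check):

// Update the existing user with the newly imported phone field.
zeev, err := client.User.
	UpdateOneID(1).
	SetPhone("055-1234567").
	Save(ctx)
if err != nil {
	log.Fatal("updating user", err)
}
fmt.Println("User updated:", zeev)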

Future Plans#

As mentioned above, this initial version supports MySQL and PostgreSQL databases.
It also supports all types of SQL relations. I have plans to further upgrade the tool and add features such as missing PostgreSQL fields, default values, and more.

Wrapping Up#

In this post, I presented entimport, a tool that was anticipated and requested many times by the Ent community. I showed an example of how to use it with Ent. This tool is another addition to Ent's schema import tools, which are designed to make the adoption of Ent even easier. For discussion and support, open an issue. The full example can be found here. I hope you found this blog post useful!


Generating OpenAPI Specification with Ent

In a previous blog post, we presented elk - an extension to Ent enabling you to generate a fully-working Go CRUD HTTP API from your schema. In today's post I'd like to introduce a shiny new feature that recently made it into elk: a fully compliant OpenAPI Specification (OAS) generator.

OAS (formerly known as Swagger Specification) is a technical specification defining a standard, language-agnostic interface description for REST APIs. This allows both humans and automated tools to understand the described service without the actual source code or additional documentation. Combined with the Swagger Tooling you can generate both server and client boilerplate code for more than 20 languages, just by passing in the OAS file.

Getting Started#

The first step is to add the elk package to your project:

go install github.com/masseelch/elk

elk uses the Ent Extension API to integrate with Ent’s code-generation. This requires that we use the entc (ent codegen) package as described here to generate code for our project. Follow the next two steps to enable it and to configure Ent to work with the elk extension:

1. Create a new Go file named ent/entc.go and paste the following content:

//go:build ignore
// +build ignore

package main

import (
	"log"

	"entgo.io/ent/entc"
	"entgo.io/ent/entc/gen"
	"github.com/masseelch/elk"
)

func main() {
	ex, err := elk.NewExtension(
		elk.GenerateSpec("openapi.json"),
	)
	if err != nil {
		log.Fatalf("creating elk extension: %v", err)
	}
	err = entc.Generate("./schema", &gen.Config{}, entc.Extensions(ex))
	if err != nil {
		log.Fatalf("running ent codegen: %v", err)
	}
}

2. Edit the ent/generate.go file to execute the ent/entc.go file:

package ent
//go:generate go run -mod=mod entc.go

With these steps complete, all is set up for generating an OAS file from your schema! If you are new to Ent and want to learn more about it, how to connect to different types of databases, run migrations or work with entities, then head over to the Setup Tutorial.

Generate an OAS file#

The first step on our way to the OAS file is to create an Ent schema graph:

go run -mod=mod entgo.io/ent/cmd/ent init Fridge Compartment Item

To demonstrate elk's OAS generation capabilities, we will build an example application together. Suppose I have multiple fridges with multiple compartments, and my significant other and I want to know their contents at all times. To supply ourselves with this incredibly useful information we will create a Go server with a RESTful API. To ease the creation of client applications that can communicate with our server, we will create an OpenAPI Specification file describing its API. Once we have that, we can build a frontend to manage fridges and contents in a language of our choice by using the Swagger Codegen! You can find an example that uses docker to generate a client here.

Let's create our schema:

ent/schema/fridge.go

package schema

import (
	"entgo.io/ent"
	"entgo.io/ent/schema/edge"
	"entgo.io/ent/schema/field"
)

// Fridge holds the schema definition for the Fridge entity.
type Fridge struct {
	ent.Schema
}

// Fields of the Fridge.
func (Fridge) Fields() []ent.Field {
	return []ent.Field{
		field.String("title"),
	}
}

// Edges of the Fridge.
func (Fridge) Edges() []ent.Edge {
	return []ent.Edge{
		edge.To("compartments", Compartment.Type),
	}
}

ent/schema/compartment.go

package schema

import (
	"entgo.io/ent"
	"entgo.io/ent/schema/edge"
	"entgo.io/ent/schema/field"
)

// Compartment holds the schema definition for the Compartment entity.
type Compartment struct {
	ent.Schema
}

// Fields of the Compartment.
func (Compartment) Fields() []ent.Field {
	return []ent.Field{
		field.String("name"),
	}
}

// Edges of the Compartment.
func (Compartment) Edges() []ent.Edge {
	return []ent.Edge{
		edge.From("fridge", Fridge.Type).
			Ref("compartments").
			Unique(),
		edge.To("contents", Item.Type),
	}
}

ent/schema/item.go

package schema

import (
	"entgo.io/ent"
	"entgo.io/ent/schema/edge"
	"entgo.io/ent/schema/field"
)

// Item holds the schema definition for the Item entity.
type Item struct {
	ent.Schema
}

// Fields of the Item.
func (Item) Fields() []ent.Field {
	return []ent.Field{
		field.String("name"),
	}
}

// Edges of the Item.
func (Item) Edges() []ent.Edge {
	return []ent.Edge{
		edge.From("compartment", Compartment.Type).
			Ref("contents").
			Unique(),
	}
}

Now, let's generate the Ent code and the OAS file.

go generate ./...

In addition to the files Ent normally generates, another file named openapi.json has been created. Copy its contents and paste them into the Swagger Editor. You should see three groups: Compartment, Item and Fridge.

Swagger Editor Example

Swagger Editor Example

If you happen to open up the POST operation tab in the Fridge group, you will see a description of the expected request data and all the possible responses. Great!

POST operation on Fridge

POST operation on Fridge

Basic Configuration#

The description of our API does not yet reflect what it does. Let's change that! elk provides easy-to-use configuration builders to manipulate the generated OAS file. Open up ent/entc.go and pass in the updated title and description of our Fridge API:

ent/entc.go
//go:build ignore
// +build ignore

package main

import (
	"log"

	"entgo.io/ent/entc"
	"entgo.io/ent/entc/gen"
	"github.com/masseelch/elk"
)

func main() {
	ex, err := elk.NewExtension(
		elk.GenerateSpec(
			"openapi.json",
			// It is a Content-Management-System ...
			elk.SpecTitle("Fridge CMS"),
			// You can use CommonMark syntax (https://commonmark.org/).
			elk.SpecDescription("API to manage fridges and their cooled contents. **ICY!**"),
			elk.SpecVersion("0.0.1"),
		),
	)
	if err != nil {
		log.Fatalf("creating elk extension: %v", err)
	}
	err = entc.Generate("./schema", &gen.Config{}, entc.Extensions(ex))
	if err != nil {
		log.Fatalf("running ent codegen: %v", err)
	}
}

Rerunning the code generator will create an updated OAS file you can copy-paste into the Swagger Editor.

Updated API Info

Updated API Info

Operation configuration#

We do not want to expose endpoints to delete a fridge (seriously, who would ever want that?!). Fortunately, elk lets us configure which endpoints to generate and which to ignore. elk's default policy is to expose all routes. You can either change this behaviour to not expose any routes but those explicitly asked for, or you can just tell elk to exclude the DELETE operation on the Fridge by using an elk.SchemaAnnotation:

ent/schema/fridge.go
// Annotations of the Fridge.
func (Fridge) Annotations() []schema.Annotation {
	return []schema.Annotation{
		elk.DeletePolicy(elk.Exclude),
	}
}

And voilà! The DELETE operation is gone.

DELETE operation is gone

DELETE operation is gone

For more information about how elk's policies work and what you can do with them, have a look at the godoc.

Extend specification#

The thing I am most interested in in this example is the current contents of a fridge. You can customize the generated OAS to any extent you like by using Hooks; however, that would exceed the scope of this post. An example of how to add an endpoint fridges/{id}/contents to the generated OAS file can be found here.

Generating an OAS-implementing server#

I promised you in the beginning we'd create a server behaving as described in the OAS. elk makes this easy; all you have to do is call elk.GenerateHandlers() when you configure the extension:

ent/entc.go
[...]
func main() {
	ex, err := elk.NewExtension(
		elk.GenerateSpec(
			[...]
		),
+		elk.GenerateHandlers(),
	)
	[...]
}

Next, re-run code generation:

go generate ./...

Observe that a new directory named ent/http was created.

» tree ent/http
ent/http
├── create.go
├── delete.go
├── easyjson.go
├── handler.go
├── list.go
├── read.go
├── relations.go
├── request.go
├── response.go
└── update.go
0 directories, 10 files

You can spin up the generated server with this very simple main.go:

package main

import (
	"context"
	"log"
	"net/http"

	"<your-project>/ent"
	elk "<your-project>/ent/http"

	_ "github.com/mattn/go-sqlite3"
	"go.uber.org/zap"
)

func main() {
	// Create the ent client.
	c, err := ent.Open("sqlite3", "file:ent?mode=memory&cache=shared&_fk=1")
	if err != nil {
		log.Fatalf("failed opening connection to sqlite: %v", err)
	}
	defer c.Close()
	// Run the auto migration tool.
	if err := c.Schema.Create(context.Background()); err != nil {
		log.Fatalf("failed creating schema resources: %v", err)
	}
	// Start listening to incoming requests.
	if err := http.ListenAndServe(":8080", elk.NewHandler(c, zap.NewExample())); err != nil {
		log.Fatal(err)
	}
}

go run -mod=mod main.go

Our Fridge API server is up and running. With the generated OAS file and the Swagger Tooling you can now generate a client stub in any supported language and forget about writing a RESTful client ever again.
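
For example, with the Swagger Codegen CLI you could scaffold a TypeScript client from the generated spec along these lines (a sketch using swagger-codegen v2 flags; adjust the language and output path to taste):

java -jar swagger-codegen-cli.jar generate -i openapi.json -l typescript-fetch -o ./client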

Wrapping Up#

In this post we introduced a new feature of elk - automatic OpenAPI Specification generation. This feature connects between Ent's code-generation capabilities and OpenAPI/Swagger's rich tooling ecosystem.

Have questions? Need help with getting started? Feel free to join our Slack channel.


Extending Ent with the Extension API

A few months ago, Ariel made a silent but highly-impactful contribution to Ent's core, the Extension API. While Ent has had extension capabilities (such as Code-gen Hooks, External Templates, and Annotations) for a long time, there wasn't a convenient way to bundle together all of these moving parts into a coherent, self-contained component. The Extension API, which we discuss in this post, does exactly that.

Many open-source ecosystems thrive specifically because they excel at providing developers an easy and structured way to extend a small, core system. Much criticism has been made of the Node.js ecosystem (even by its original creator Ryan Dahl) but it is very hard to argue that the ease of publishing and consuming new npm modules facilitated the explosion in its popularity. I've discussed on my personal blog how protoc's plugin system works and how that made the Protobuf ecosystem thrive. In short, ecosystems are only created under modular designs.

In our post today, we will explore Ent's Extension API by building a toy example.

Getting Started#

The Extension API only works for projects that use Ent's code-generation as a Go package. To set that up, after initializing your project, create a new file named ent/entc.go:

//+build ignore

package main

import (
	"log"

	"entgo.io/ent/entc"
	"entgo.io/ent/entc/gen"
)

func main() {
	err := entc.Generate("./schema", &gen.Config{})
	if err != nil {
		log.Fatal("running ent codegen:", err)
	}
}

Next, modify ent/generate.go to invoke our entc file:

package ent
//go:generate go run entc.go

Creating our Extension#

All extensions must implement the Extension interface:

type Extension interface {
	// Hooks holds an optional list of Hooks to apply
	// on the graph before/after the code-generation.
	Hooks() []gen.Hook

	// Annotations injects global annotations to the gen.Config object that
	// can be accessed globally in all templates. Unlike schema annotations,
	// being serializable to JSON raw value is not mandatory.
	//
	//	{{- with $.Config.Annotations.GQL }}
	//		{{/* Annotation usage goes here. */}}
	//	{{- end }}
	//
	Annotations() []Annotation

	// Templates specifies a list of alternative templates
	// to execute or to override the default.
	Templates() []*gen.Template

	// Options specifies a list of entc.Options to evaluate on
	// the gen.Config before executing the code generation.
	Options() []Option
}

To simplify the development of new extensions, developers can embed entc.DefaultExtension to create extensions without implementing all methods. In entc.go, add:

// ...

// GreetExtension implements entc.Extension.
type GreetExtension struct {
	entc.DefaultExtension
}

Currently, our extension doesn't do anything. Next, let's connect it to our code-generation config. In entc.go, add our new extension to the entc.Generate invocation:

err := entc.Generate("./schema", &gen.Config{}, entc.Extensions(&GreetExtension{}))

Adding Templates#

External templates can be bundled into extensions to enhance Ent's core code-generation functionality. With our toy example, our goal is to add to each entity a generated method named Greet that returns a greeting with the type's name when invoked. We're aiming for something like:

func (u *User) Greet() string {
	return "Greetings, User"
}

To do this, let's add a new external template file and place it in ent/templates/greet.tmpl:

ent/templates/greet.tmpl
{{ define "greet" }}
{{/* Add the base header for the generated file */}}
{{ $pkg := base $.Config.Package }}
{{ template "header" $ }}
{{/* Loop over all nodes and add the Greet method */}}
{{ range $n := $.Nodes }}
{{ $receiver := $n.Receiver }}
func ({{ $receiver }} *{{ $n.Name }}) Greet() string {
return "Greetings, {{ $n.Name }}"
}
{{ end }}
{{ end }}

Next, let's implement the Templates method:

ent/entc.go
func (*GreetExtension) Templates() []*gen.Template {
	return []*gen.Template{
		gen.MustParse(gen.NewTemplate("greet").ParseFiles("templates/greet.tmpl")),
	}
}

Next, let's kick the tires on our extension. Add a new schema for the User type in a file named ent/schema/user.go:

package schema

import (
	"entgo.io/ent"
	"entgo.io/ent/schema/field"
)

// User holds the schema definition for the User entity.
type User struct {
	ent.Schema
}

// Fields of the User.
func (User) Fields() []ent.Field {
	return []ent.Field{
		field.String("email_address").
			Unique(),
	}
}

Next, run:

go generate ./...

Observe that a new file, ent/greet.go, was created. It contains:

ent/greet.go
// Code generated by entc, DO NOT EDIT.

package ent

func (u *User) Greet() string {
	return "Greetings, User"
}

Great! Our extension was invoked from Ent's code-generation and produced the code we wanted for our schema!

Adding Annotations#

Annotations provide a way to supply users of our extension with an API to modify the behavior of code generation logic. To add annotations to our extension, implement the Annotations method. Suppose that for our GreetExtension we want to provide users with the ability to configure the greeting word in the generated code:

// GreetingWord implements entc.Annotation.
type GreetingWord string

// Name of the annotation.
func (GreetingWord) Name() string {
	return "GreetingWord"
}

Next, we add a word field to our GreetExtension struct:

type GreetExtension struct {
	entc.DefaultExtension
	Word GreetingWord
}

Next, implement the Annotations method:

func (s *GreetExtension) Annotations() []entc.Annotation {
	return []entc.Annotation{
		s.Word,
	}
}

Now, from within your templates you can access the GreetingWord annotation. Modify ent/templates/greet.tmpl to use our new annotation:

func ({{ $receiver }} *{{ $n.Name }}) Greet() string {
	return "{{ $.Annotations.GreetingWord }}, {{ $n.Name }}"
}

Next, modify the code-generation configuration to set the GreetingWord annotation:

err := entc.Generate("./schema",
	&gen.Config{},
	entc.Extensions(&GreetExtension{
		Word: GreetingWord("Shalom"),
	}),
)

To see our annotation control the generated code, re-run:

go generate ./...

Finally, observe that the generated ent/greet.go was updated:

func (u *User) Greet() string {
	return "Shalom, User"
}

Hooray! We added an option to use an annotation to control the greeting word in the generated Greet method!

More Possibilities#

In addition to templates and annotations, the Extension API allows developers to bundle gen.Hooks and entc.Options in extensions to further control the behavior of your code-generation. In this post we will not discuss these possibilities, but if you are interested in using them head over to the documentation.
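
Still, for a quick taste, here is a hedged sketch of bundling a code-generation hook in our GreetExtension. It only logs every node in the graph before generation proceeds, and it assumes Ent's gen.GenerateFunc adapter:

func (*GreetExtension) Hooks() []gen.Hook {
	return []gen.Hook{
		func(next gen.Generator) gen.Generator {
			return gen.GenerateFunc(func(g *gen.Graph) error {
				// Log each type before handing the graph to the next generator.
				for _, n := range g.Nodes {
					log.Println("generating:", n.Name)
				}
				return next.Generate(g)
			})
		},
	}
}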

Wrapping Up#

In this post we explored via a toy example how to use the Extension API to create new Ent code-generation extensions. As we've mentioned above, modular design that allows anyone to extend the core functionality of software is critical to the success of any ecosystem. We're seeing this claim start to realize with the Ent community, here's a list of some interesting projects that use the Extension API:

  • elk - an extension to generate REST endpoints from Ent schemas.
  • entgql - generate GraphQL servers from Ent schemas.
  • entviz - generate ER diagrams from Ent schemas.

And what about you? Do you have an idea for a useful Ent extension? I hope this post demonstrated that with the new Extension API, it is not a difficult task.

Have questions? Need help with getting started? Feel free to join our Slack channel.


Ent Joins the Linux Foundation

Dear community,

I’m really happy to share something that has been in the works for quite some time. Yesterday (August 31st), a press release was issued announcing that Ent is joining the Linux Foundation.

Ent was open-sourced while I was working on it with my peers at Facebook in 2019. Since then, our community has grown, and we’ve seen the adoption of Ent explode across many organizations of different sizes and sectors.

Our goal with moving under the governance of the Linux Foundation is to provide a corporate-neutral environment in which organizations can more easily contribute code, as we’ve seen with other successful OSS projects such as Kubernetes and GraphQL. In addition, the move under the governance of the Linux Foundation positions Ent where we would like it to be, a core, infrastructure technology that organizations can trust because it is guaranteed to be here for a long time.

In terms of our community, nothing in particular changes: the repository moved to github.com/ent/ent a few months ago, the license remains Apache 2.0, and we are all 100% committed to the success of the project. We’re sure that the Linux Foundation’s strong brand and organizational capabilities will help to build even more confidence in Ent and further foster its adoption in the industry.

I wanted to express my deep gratitude to the amazing folks at Facebook and the Linux Foundation that have worked hard on making this change possible and showing trust in our community to keep pushing the state-of-the-art in data access frameworks. This is a big achievement for our community, and so I want to take a moment to thank all of you for your contributions, support, and trust in this project.

On a personal note, I wanted to share that Rotem (a core contributor to Ent) and I have founded a new company, Ariga. We’re on a mission to build something that we call an “operational data graph” that is heavily built using Ent; we will be sharing more details on that in the near future. You can expect to see many new exciting features contributed to the framework by our team. In addition, Ariga employees will dedicate time and resources to support and foster this wonderful community.

If you have any questions about this change or have any ideas on how to make it even better, please don’t hesitate to reach out to me on our Slack channel.

Ariel ❤️

Visualizing Your Data Graph Using entviz

Joining an existing project with a large codebase can be a daunting task.

Understanding the data model of an application is key for developers when starting to work on an existing project. ER (Entity Relation) diagrams are a commonly used tool to overcome this challenge and help developers grasp an application's data model.

ER diagrams provide a visual representation of your data model and detail each entity's fields. Many tools can help create these; one example is JetBrains DataGrip, which can generate an ER diagram by connecting to and inspecting an existing database:

Datagrip ER diagram

Example ER diagram in DataGrip

Ent, a simple yet powerful entity framework for Go, was originally developed inside Facebook specifically for dealing with projects with large and complex data models. This is why Ent uses code generation - it gives type-safety and code completion out-of-the-box, which helps explain the data model and increases developer velocity. On top of all of that, wouldn't it be great to automatically generate ER diagrams that maintain a high-level view of the data model in a visually appealing representation? (Who doesn't love visualizations?)

Introducing entviz#

entviz is an ent extension that automatically generates a static HTML page that visualizes your data graph.

Entviz example output

Example entviz output

Most ER diagram generation tools need to connect to your database and introspect it, which makes it harder to maintain an up-to-date diagram of the database schema. Since entviz integrates directly with your Ent schema, it does not need to connect to your database, and it automatically generates a fresh visualization every time you modify your schema.

If you want to learn more about how entviz was implemented, check out the Under the Hood section below.

Seeing It in Action#

First, let's add the entviz extension to our entc.go file:

go get github.com/hedwigz/entviz

If you are not familiar with entc, be sure to read the entc documentation first.
ent/entc.go
import (
	"log"

	"entgo.io/ent/entc"
	"entgo.io/ent/entc/gen"
	"github.com/hedwigz/entviz"
)

func main() {
	err := entc.Generate("./schema", &gen.Config{}, entc.Extensions(entviz.Extension{}))
	if err != nil {
		log.Fatalf("running ent codegen: %v", err)
	}
}

Let's say we have a simple schema with a User entity and a few fields:

ent/schema/user.go
// Fields of the User.
func (User) Fields() []ent.Field {
	return []ent.Field{
		field.String("name"),
		field.String("email"),
		field.Time("created").
			Default(time.Now),
	}
}

Now, entviz will automatically generate a visualization of our graph every time we run code generation:

go generate ./...

You should see a new file called schema-viz.html in your ent directory:

$ ll ./ent/schema-viz.html
-rw-r--r-- 1 hedwigz hedwigz 7.3K Aug 27 09:00 schema-viz.html

Open the HTML file with your favorite browser to see the visualization:

tutorial image

Next, let's add another entity named Post, and see how the visualization changes:

ent init Post
ent/schema/post.go
// Fields of the Post.
func (Post) Fields() []ent.Field {
	return []ent.Field{
		field.String("content"),
		field.Time("created").
			Default(time.Now),
	}
}

Next, let's add a (O2M) edge from User to Post:

ent/schema/user.go
// Edges of the User.
func (User) Edges() []ent.Edge {
	return []ent.Edge{
		edge.To("posts", Post.Type),
	}
}

Finally, let's regenerate the code:

go generate ./...

Refresh your browser to see the updated result!

tutorial image 2

Under the Hood#

entviz was implemented by extending ent via its Extension API, which lets you bundle together templates, hooks, options, and annotations. For instance, entviz uses templates to add another go file, entviz.go, which exposes the ServeEntviz method that can be used as an http handler, like so:

func main() {
	http.ListenAndServe("localhost:3002", ent.ServeEntviz())
}

We define an extension struct that embeds the default extension, and we export our template via the Templates method:

//go:embed entviz.go.tmpl
var tmplfile string

type Extension struct {
	entc.DefaultExtension
}

func (Extension) Templates() []*gen.Template {
	return []*gen.Template{
		gen.MustParse(gen.NewTemplate("entviz").Parse(tmplfile)),
	}
}

The template file is the code that we want to generate:

{{ define "entviz"}}
{{ $pkg := base $.Config.Package }}
{{ template "header" $ }}
import (
_ "embed"
"net/http"
"strings"
"time"
)
//go:embed schema-viz.html
var html string
func ServeEntviz() http.Handler {
generateTime := time.Now()
return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
http.ServeContent(w, req, "schema-viz.html", generateTime, strings.NewReader(html))
})
}
{{ end }}

That's it! We now have a new method in the ent package.

Wrapping Up#

We saw how ER diagrams help developers grasp a data model. Next, we introduced entviz - an Ent extension that automatically generates an ER diagram for Ent schemas. We saw how entviz utilizes Ent's Extension API to extend the code generation and add extra functionality. Finally, we saw it in action by installing and using entviz in our own project. If you like the code and/or want to contribute - feel free to check out the project on GitHub.

Have questions? Need help with getting started? Feel free to join our Slack channel.


Building Observable Ent Applications with Prometheus

Observability is a quality of a system that refers to how well its internal state can be measured externally. As a computer program evolves into a full-blown production system this quality becomes increasingly important. One of the ways to make a software system more observable is to export metrics, that is, to report in some externally visible way a quantitative description of the running system's state. For instance, to expose an HTTP endpoint where we can see how many errors occurred since the process has started. In this post, we will explore how to build more observable Ent applications using Prometheus.

What is Ent?#

Ent is a simple, yet powerful entity framework for Go that makes it easy to build and maintain applications with large data models.

What is Prometheus?#

Prometheus is an open source monitoring system developed by engineers at SoundCloud in 2012. It includes an embedded time-series database and many integrations to third-party systems. The Prometheus client exposes the process's metrics via an HTTP endpoint (usually /metrics); this endpoint is discovered by the Prometheus scraper, which polls the endpoint every interval (typically 30s) and writes the samples into a time-series database.
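
For reference, a minimal scrape configuration for the example server we build later in this post might look like this (the job name and target address are assumptions based on the example's port):

scrape_configs:
  - job_name: entprom-example
    scrape_interval: 30s
    static_configs:
      - targets: ["localhost:8080"]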

Prometheus is just an example of a class of metric collection backends. Many others, such as AWS CloudWatch, InfluxDB and others exist and are in wide use in the industry. Towards the end of this post, we will discuss a possible path to a unified, standards-based integration with any such backend.

Working with Prometheus#

To expose an application's metrics using Prometheus, we need to create a Prometheus Collector; a collector collects a set of metrics from your server.

In our example, we will be using two types of metrics that can be stored in a collector: Counters and Histograms. Counters are monotonically increasing cumulative metrics that represent how many times something has happened, commonly used to count the number of requests a server has processed or errors that have occurred. Histograms sample observations into buckets of configurable sizes and are commonly used to represent latency distributions (i.e., how many requests returned in under 5ms, 10ms, 100ms, 1s, etc.). In addition, Prometheus allows metrics to be broken down into labels. This is useful, for example, for counting requests while breaking down the counter by endpoint name.

Let’s see how to create such a collector using the official Go client. To do so, we will use a package in the client called promauto that simplifies the process of creating collectors. A simple example of a collector that counts, for example, total requests or the number of request errors:

package example

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

var (
	// List of dynamic labels.
	labelNames = []string{"endpoint", "error_code"}

	// Create a counter collector.
	exampleCollector = promauto.NewCounterVec(
		prometheus.CounterOpts{
			Name: "endpoint_errors",
			Help: "Number of errors in endpoints",
		},
		labelNames,
	)
)

// When using it, you set the values of the dynamic labels and then increment the counter.
func incrementError() {
	exampleCollector.WithLabelValues("/create-user", "400").Inc()
}

Ent Hooks#

Hooks are a feature of Ent that allows adding custom logic before and after operations that change the data entities.

A mutation is an operation that changes something in the database. There are 5 types of mutations:

  1. Create.
  2. UpdateOne.
  3. Update.
  4. DeleteOne.
  5. Delete.

Hooks are functions that get an ent.Mutator and return a mutator back. They function similarly to the popular HTTP middleware pattern.

package example

import (
	"context"

	"entgo.io/ent"
)

func exampleHook() ent.Hook {
	// Use this to init your hook.
	return func(next ent.Mutator) ent.Mutator {
		return ent.MutateFunc(func(ctx context.Context, m ent.Mutation) (ent.Value, error) {
			// Do something before mutation.
			v, err := next.Mutate(ctx, m)
			if err != nil {
				// Do something if error after mutation.
			}
			// Do something after mutation.
			return v, err
		})
	}
}

In Ent, there are two types of mutation hooks - schema hooks and runtime hooks. Schema hooks are mainly used for defining custom mutation logic on a specific entity type, for example, syncing entity creation to another system. Runtime hooks, on the other hand, are used to define more global logic for adding things like logging, metrics, tracing, etc.

For our use case, we should definitely use runtime hooks, because to be valuable we want to export metrics on all operations on all entity types:

package example

import (
	"entprom/ent"
	"entprom/ent/hook"
)

func main() {
	client, _ := ent.Open("sqlite3", "file:ent?mode=memory&cache=shared&_fk=1")

	// Add a hook only on user mutations.
	client.User.Use(exampleHook())

	// Add a hook only on update operations.
	client.Use(hook.On(exampleHook(), ent.OpUpdate|ent.OpUpdateOne))
}

Exporting Prometheus Metrics for an Ent Application#

With all of the introductions complete, let’s cut to the chase and show how to use Prometheus and Ent hooks together to create an observable application. Our goal with this example is to export these metrics using a hook:

| Metric Name                    | Description                              |
|--------------------------------|------------------------------------------|
| ent_operation_total            | Number of ent mutation operations        |
| ent_operation_error            | Number of failed ent mutation operations |
| ent_operation_duration_seconds | Time in seconds per operation            |

Each of these metrics will be broken down by labels into two dimensions:

  • mutation_type: Entity type that is being mutated (User, BlogPost, Account etc.).
  • mutation_op: The operation that is being performed (Create, Delete etc.).

Let’s start by defining our collectors:

// Ent dynamic dimensions.
const (
	mutationType = "mutation_type"
	mutationOp   = "mutation_op"
)

var entLabels = []string{mutationType, mutationOp}

// Create a collector for the total operations counter.
func initOpsProcessedTotal() *prometheus.CounterVec {
	return promauto.NewCounterVec(
		prometheus.CounterOpts{
			Name: "ent_operation_total",
			Help: "Number of ent mutation operations",
		},
		entLabels,
	)
}

// Create a collector for the error counter.
func initOpsProcessedError() *prometheus.CounterVec {
	return promauto.NewCounterVec(
		prometheus.CounterOpts{
			Name: "ent_operation_error",
			Help: "Number of failed ent mutation operations",
		},
		entLabels,
	)
}

// Create a collector for the duration histogram.
func initOpsDuration() *prometheus.HistogramVec {
	return promauto.NewHistogramVec(
		prometheus.HistogramOpts{
			Name: "ent_operation_duration_seconds",
			Help: "Time in seconds per operation",
		},
		entLabels,
	)
}

Next, let’s define our new hook:

// Hook initializes the collectors, increments the total counter before each
// mutation, increments the error counter on failures, and records the
// duration after the mutation completes.
func Hook() ent.Hook {
	opsProcessedTotal := initOpsProcessedTotal()
	opsProcessedError := initOpsProcessedError()
	opsDuration := initOpsDuration()
	return func(next ent.Mutator) ent.Mutator {
		return ent.MutateFunc(func(ctx context.Context, m ent.Mutation) (ent.Value, error) {
			// Before mutation, start measuring time.
			start := time.Now()
			// Extract dynamic labels from mutation.
			labels := prometheus.Labels{mutationType: m.Type(), mutationOp: m.Op().String()}
			// Increment total ops counter.
			opsProcessedTotal.With(labels).Inc()
			// Execute mutation.
			v, err := next.Mutate(ctx, m)
			if err != nil {
				// In case of error, increment error counter.
				opsProcessedError.With(labels).Inc()
			}
			// Stop time measure.
			duration := time.Since(start)
			// Record duration in seconds.
			opsDuration.With(labels).Observe(duration.Seconds())
			return v, err
		})
	}
}

Connecting the Prometheus Collector to our Service#

After defining our hook, let’s see next how to connect it to our application and how to use Prometheus to serve an endpoint that exposes the metrics in our collectors:

package main

import (
	"context"
	"log"
	"net/http"

	"entprom"
	"entprom/ent"

	_ "github.com/mattn/go-sqlite3"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func createClient() *ent.Client {
	c, err := ent.Open("sqlite3", "file:ent?mode=memory&cache=shared&_fk=1")
	if err != nil {
		log.Fatalf("failed opening connection to sqlite: %v", err)
	}
	ctx := context.Background()
	// Run the auto migration tool.
	if err := c.Schema.Create(ctx); err != nil {
		log.Fatalf("failed creating schema resources: %v", err)
	}
	return c
}

func handler(client *ent.Client) func(w http.ResponseWriter, r *http.Request) {
	return func(w http.ResponseWriter, r *http.Request) {
		ctx := context.Background()
		// Run operations.
		_, err := client.User.Create().SetName("a8m").Save(ctx)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
	}
}

func main() {
	// Create Ent client and migrate.
	client := createClient()
	// Use the hook.
	client.Use(entprom.Hook())
	// Simple handler to run actions on our DB.
	http.HandleFunc("/", handler(client))
	// This endpoint exposes metrics for Prometheus to collect.
	http.Handle("/metrics", promhttp.Handler())
	log.Println("server starting on port 8080")
	// Run the server.
	log.Fatal(http.ListenAndServe(":8080", nil))
}

After accessing / on our server a few times (using curl or a browser), go to /metrics. There you will see the output from the Prometheus client:

# HELP ent_operation_duration_seconds Time in seconds per operation
# TYPE ent_operation_duration_seconds histogram
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="0.005"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="0.01"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="0.025"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="0.05"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="0.1"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="0.25"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="0.5"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="1"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="2.5"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="5"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="10"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="+Inf"} 2
ent_operation_duration_seconds_sum{mutation_op="OpCreate",mutation_type="User"} 0.000265669
ent_operation_duration_seconds_count{mutation_op="OpCreate",mutation_type="User"} 2
# HELP ent_operation_error Number of failed ent mutation operations
# TYPE ent_operation_error counter
ent_operation_error{mutation_op="OpCreate",mutation_type="User"} 1
# HELP ent_operation_total Number of ent mutation operations
# TYPE ent_operation_total counter
ent_operation_total{mutation_op="OpCreate",mutation_type="User"} 2

At the top, we can see the calculated histogram, which counts the number of operations that fall into each “bucket”. Below it, we can see the total number of operations and the number of errors. Each metric is accompanied by its description, which is also shown when querying the metrics from the Prometheus dashboard.

The Prometheus client is only one component of the Prometheus architecture. To run a complete system, including a scraper that polls your endpoint, a Prometheus server that stores your metrics and answers queries, and a simple UI to interact with it, I recommend reading the official documentation or using the docker-compose.yaml in this example repo.

Future Work on Observability in Ent#

As we’ve mentioned above, there is an abundance of metric-collection backends available today, Prometheus being just one of many successful projects. While these solutions differ in many dimensions (self-hosted vs. SaaS, different storage engines with different query languages, and more), from the metric-reporting client’s perspective they are virtually identical.

In cases like these, good software engineering principles suggest that the concrete backend should be abstracted away from the client behind an interface. Each backend can then implement the interface, so client applications can easily switch between implementations. Changes of this kind have been happening across our industry in recent years. Consider, for example, the Open Container Initiative or the Service Mesh Interface: both are initiatives that strive to define a standard interface for a problem space, around which an ecosystem of implementations can grow. In the observability space, the exact same convergence is occurring, with OpenCensus and OpenTracing currently merging into OpenTelemetry.

As nice as it would be to publish an Ent + Prometheus extension similar to the one presented in this post, we are firm believers that observability should be solved with a standards-based approach. We invite everyone to join the discussion on what is the right way to do this for Ent.

Wrap-Up#

We started this post by presenting Prometheus, a popular open-source monitoring solution. Next, we reviewed “Hooks”, a feature of Ent that allows adding custom logic before and after operations that change the data entities. We then showed how to integrate the two to create observable applications using Ent. Finally, we discussed the future of observability in Ent and invited everyone to join the discussion to shape it.

Have questions? Need help with getting started? Feel free to join our Slack channel.

For more Ent news and updates:

Version 0.9.0 adds the Upsert API!

There is a good reason why almost four months have passed since the last release: version 0.9.0, released today, ships several long-awaited features. Chief among them is the Upsert API, a feature that has been under discussion for over a year and a half and was one of the most requested capabilities in the Ent user survey!

Version 0.9.0 adds support for “Upsert”-style statements via a new feature flag, sql/upsert. Ent has a collection of feature flags that can be switched on to add more features to the code generated by Ent. Feature flags are used as a mechanism to opt in to features that are not necessarily desired in every project, and as a way to experiment with features that may one day become part of Ent’s core.

In this post, we will introduce the new feature, describe where it is useful, and demonstrate how to use it.

Upsert#

“Upsert” is a commonly-used term in data systems, a portmanteau of “update” and “insert”. It usually refers to a statement that attempts to insert a record into a table and, if a uniqueness constraint is violated (for example, a record with that ID already exists), updates the existing record instead. While popular relational databases do not have a dedicated UPSERT statement, most of them support ways of achieving this type of behavior.

For example, assume we have a table in an SQLite database defined as:

CREATE TABLE users (
id integer PRIMARY KEY AUTOINCREMENT,
email varchar(255) UNIQUE,
name varchar(255)
)

If we try to run the same insert twice:

INSERT INTO users (email, name) VALUES ('rotem@entgo.io', 'Rotem Tamir');
INSERT INTO users (email, name) VALUES ('rotem@entgo.io', 'Rotem Tamir');

We get the following error:

[2021-08-05 06:49:22] UNIQUE constraint failed: users.email

In many cases it is useful for write operations to be idempotent, meaning we can run them many times in a row while leaving the system in the same state.

In other cases, it is undesirable to query whether a record exists before trying to create it. For situations like these, SQLite supports the ON CONFLICT clause in INSERT statements. To instruct SQLite to override the existing values with the new ones, we run:

INSERT INTO users (email, name) values ('rotem@entgo.io', 'Tamir, Rotem')
ON CONFLICT (email) DO UPDATE SET email=excluded.email, name=excluded.name;

If we prefer to keep the existing values, we can use the DO NOTHING conflict action:

INSERT INTO users (email, name) values ('rotem@entgo.io', 'Tamir, Rotem')
ON CONFLICT DO NOTHING;

If we want to somehow merge the two versions, we can use the DO UPDATE action a little differently:

INSERT INTO users (email, name) values ('rotem@entgo.io', 'Tamir, Rotem')
ON CONFLICT (email) DO UPDATE SET name=excluded.name || ' (formerly: ' || users.name || ')'

In this case, after the second INSERT, the value of the name column would be: Tamir, Rotem (formerly: Rotem Tamir). Not terribly useful, but hopefully it shows that you can do cool things this way.

Upsert with Ent#

Assume we have an existing Ent project with an entity similar to the users table described above:

// User holds the schema definition for the User entity.
type User struct {
ent.Schema
}
// Fields of the User.
func (User) Fields() []ent.Field {
return []ent.Field{
field.String("email").
Unique(),
field.String("name"),
}
}

Since the Upsert API is a newly released feature, make sure to update your ent version:

go get -u entgo.io/ent@v0.9.0

Next, add the sql/upsert feature flag to the code-generation flags in ent/generate.go:

package ent
//go:generate go run -mod=mod entgo.io/ent/cmd/ent generate --feature sql/upsert ./schema

Then re-run your project’s code generation:

go generate ./...

Observe that a new method named OnConflict was added to the ent/user_create.go file:

// OnConflict allows configuring the `ON CONFLICT` / `ON DUPLICATE KEY` clause
// of the `INSERT` statement. For example:
//
// client.User.Create().
// SetEmailAddress(v).
// OnConflict(
// // Update the row with the new values
// // that were proposed for insertion.
// sql.ResolveWithNewValues(),
// ).
// // Override some of the fields with custom
// // update values.
// Update(func(u *ent.UserUpsert) {
// SetEmailAddress(v+v)
// }).
// Exec(ctx)
//
func (uc *UserCreate) OnConflict(opts ...sql.ConflictOption) *UserUpsertOne {
uc.conflict = opts
return &UserUpsertOne{
create: uc,
}
}
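The conflict behavior can be set either with the fluent modifiers used in the tests below, or by passing conflict options directly to OnConflict, as the generated comment above demonstrates. A minimal sketch, assuming the entgo.io/ent/dialect/sql import (sql.ResolveWithIgnore is the option form of the Ignore modifier shown later):

// Keep the existing row untouched if the insert conflicts.
err := client.User.Create().
SetEmail("rotem@entgo.io").
SetName("Rotem Tamir").
OnConflict(sql.ResolveWithIgnore()).
Exec(ctx)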

The generated code helps us achieve upsert behavior for the User entity. To explore it, let’s first write a test that reproduces the uniqueness-constraint error:

func TestUniqueConstraintFails(t *testing.T) {
client := enttest.Open(t, "sqlite3", "file:ent?mode=memory&cache=shared&_fk=1")
ctx := context.TODO()
// Create the user for the first time.
client.User.
Create().
SetEmail("rotem@entgo.io").
SetName("Rotem Tamir").
SaveX(ctx)
// Then, try to create a user with the same email address.
_, err := client.User.
Create().
SetEmail("rotem@entgo.io").
SetName("Rotem Tamir").
Save(ctx)
if !ent.IsConstraintError(err) {
log.Fatalf("expected second created to fail with constraint error")
}
log.Printf("second query failed with: %v", err)
}

The test passes:

=== RUN TestUniqueConstraintFails
2021/08/05 07:12:11 second query failed with: ent: constraint failed: insert node to table "users": UNIQUE constraint failed: users.email
--- PASS: TestUniqueConstraintFails (0.00s)

Next, let’s see how to instruct Ent to override the existing values when a conflict occurs:

func TestUpsertReplace(t *testing.T) {
client := enttest.Open(t, "sqlite3", "file:ent?mode=memory&cache=shared&_fk=1")
ctx := context.TODO()
// Create the user for the first time.
orig := client.User.
Create().
SetEmail("rotem@entgo.io").
SetName("Rotem Tamir").
SaveX(ctx)
// Then, create a user with the same email address,
// but this time specify the `UpdateNewValues`
// modifier as the conflict action.
newID := client.User.Create().
SetEmail("rotem@entgo.io").
SetName("Tamir, Rotem").
OnConflict().
UpdateNewValues().
// Use the IDX method to receive the ID
// of the created/updated entity.
IDX(ctx)
// We expect the ID of the originally created user
// to be the same as the ID of the user we just updated.
if orig.ID != newID {
log.Fatalf("expected upsert to update an existing record")
}
current := client.User.GetX(ctx, orig.ID)
if current.Name != "Tamir, Rotem" {
log.Fatalf("expected upsert to replace with the new values")
}
}

Running the test:

=== RUN TestUpsertReplace
--- PASS: TestUpsertReplace (0.00s)

Alternatively, we can use the Ignore modifier to instruct Ent to keep the old version when resolving the conflict. Let’s write a test for it:

func TestUpsertIgnore(t *testing.T) {
client := enttest.Open(t, "sqlite3", "file:ent?mode=memory&cache=shared&_fk=1")
ctx := context.TODO()
// Create the user for the first time.
orig := client.User.
Create().
SetEmail("rotem@entgo.io").
SetName("Rotem Tamir").
SaveX(ctx)
// Then, create a user with the same email address,
// but this time specify the `Ignore`
// modifier as the conflict action.
client.User.
Create().
SetEmail("rotem@entgo.io").
SetName("Tamir, Rotem").
OnConflict().
Ignore().
ExecX(ctx)
current := client.User.GetX(ctx, orig.ID)
if current.Name != orig.Name {
log.Fatalf("expected upsert to keep the original version")
}
}
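The generated builders can also resolve conflicts on a specific column and override individual fields, mirroring the raw SQL merge example from earlier. A minimal sketch, assuming the generated user package for this schema:

// Resolve conflicts on the unique email column only,
// and override just the name field on conflict.
err := client.User.Create().
SetEmail("rotem@entgo.io").
SetName("Tamir, Rotem").
OnConflictColumns(user.FieldEmail).
Update(func(u *ent.UserUpsert) {
u.SetName("Tamir, Rotem")
}).
Exec(ctx)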

To learn more about this feature, see the Feature Flags or Upsert API documentation.

Wrapping Up#

In this post, we presented the Upsert API, a long-awaited feature that is available behind a feature flag in Ent v0.9.0. We discussed where upserts are commonly used in applications and how they are implemented in popular RDBMSs. Finally, we showed a simple example of how to get started with the Upsert API in Ent.

Have questions? Need help with getting started? Feel free to join our Slack channel.

For more Ent news and updates:

Generating a fully-working Go CRUD HTTP API with Ent

One of Ent’s core principles, “Schema as Code”, means more than “Ent’s DSL for defining entities and their edges uses Go code”. Compared with many other ORMs, Ent’s unique approach lets you express all of the logic related to an entity as code, directly in the schema definition.

With Ent, developers can write all authorization logic (called Privacy in Ent) and all mutation side-effects (called Hooks in Ent) directly in the schema. Having everything in the same place is very convenient on its own, but the true power shows when it is combined with code generation.

If schemas are defined this way, it becomes possible to automatically generate fully-working, production-grade server code. Once the responsibility for authorization decisions and custom side effects moves from the RPC layer to the data layer, implementing basic CRUD (Create, Read, Update, Delete) endpoints becomes generic enough to be generated mechanically. This is exactly the idea behind the popular GraphQL and gRPC Ent extensions.

Today we introduce elk, a new Ent extension that automatically generates fully-working RESTful API endpoints from an Ent schema. elk strives to automate all of the tedious work of setting up basic CRUD endpoints for every entity you add to your graph: logging, request-body validation, eager-loading of relations, serialization, and more, all while eliminating reflection and preserving type-safety.

Let’s get started!

Getting Started#

The final version of this code is available on GitHub.

Start by creating a new Go project:

mkdir elk-example
cd elk-example
go mod init elk-example

Invoke the Ent code generator to create two schemas, Pet and User:

go run -mod=mod entgo.io/ent/cmd/ent init Pet User

Your project should now look like this:

.
├── ent
│ ├── generate.go
│ └── schema
│ ├── pet.go
│ └── user.go
├── go.mod
└── go.sum

Next, let’s add the elk package to our project:

go get -u github.com/masseelch/elk

elk uses Ent’s extension API to integrate with Ent’s code generation. This requires us to use the entc (ent codegen) package, as described here. Follow these three steps to enable it and configure Ent to work with the elk extension:

1. Create a new Go file named ent/entc.go and paste the following content:

// +build ignore
package main
import (
"log"
"entgo.io/ent/entc"
"entgo.io/ent/entc/gen"
"github.com/masseelch/elk"
)
func main() {
ex, err := elk.NewExtension(
elk.GenerateSpec("openapi.json"),
elk.GenerateHandlers(),
)
if err != nil {
log.Fatalf("creating elk extension: %v", err)
}
err = entc.Generate("./schema", &gen.Config{}, entc.Extensions(ex))
if err != nil {
log.Fatalf("running ent codegen: %v", err)
}
}

2. Edit the ent/generate.go file to execute the ent/entc.go file:

package ent
//go:generate go run -mod=mod entc.go

3. elk uses some external packages in the code it generates. Currently, you need to fetch those packages manually once when setting up elk:

go get github.com/mailru/easyjson github.com/masseelch/render github.com/go-chi/chi/v5 go.uber.org/zap

With these steps complete, everything is set up to use elk-powered ent. To learn more about Ent, how to connect to different kinds of databases, run migrations, and work with entities, have a look at the Setup Tutorial.

Generating HTTP CRUD Handlers with elk#

To generate fully-working HTTP handlers, we first need to create an Ent schema definition. Open ent/schema/pet.go and edit it:

package schema
import (
"entgo.io/ent"
"entgo.io/ent/schema/field"
)
// Pet holds the schema definition for the Pet entity.
type Pet struct {
ent.Schema
}
// Fields of the Pet.
func (Pet) Fields() []ent.Field {
return []ent.Field{
field.String("name"),
field.Int("age"),
}
}

We added two fields, name and age, to the Pet entity. The ent.Schema just defines the fields of the entity. To generate runnable code from this schema, run:

go generate ./...

Notice that, in addition to the files Ent normally generates, another directory named ent/http was created. These files were generated by the elk extension and contain the code for the generated HTTP handlers. For example, here is part of the code generated for the read operation on the Pet entity:

const (
PetCreate Routes = 1 << iota
PetRead
PetUpdate
PetDelete
PetList
PetRoutes = 1<<iota - 1
)
// PetHandler handles http crud operations on ent.Pet.
type PetHandler struct {
handler
client *ent.Client
log *zap.Logger
}
func NewPetHandler(c *ent.Client, l *zap.Logger) *PetHandler {
return &PetHandler{
client: c,
log: l.With(zap.String("handler", "PetHandler")),
}
}
// Read fetches the ent.Pet identified by a given url-parameter from the
// database and renders it to the client.
func (h *PetHandler) Read(w http.ResponseWriter, r *http.Request) {
l := h.log.With(zap.String("method", "Read"))
// ID is URL parameter.
id, err := strconv.Atoi(chi.URLParam(r, "id"))
if err != nil {
l.Error("error getting id from url parameter", zap.String("id", chi.URLParam(r, "id")), zap.Error(err))
render.BadRequest(w, r, "id must be an integer greater zero")
return
}
// Create the query to fetch the Pet
q := h.client.Pet.Query().Where(pet.ID(id))
e, err := q.Only(r.Context())
if err != nil {
switch {
case ent.IsNotFound(err):
msg := stripEntError(err)
l.Info(msg, zap.Error(err), zap.Int("id", id))
render.NotFound(w, r, msg)
case ent.IsNotSingular(err):
msg := stripEntError(err)
l.Error(msg, zap.Error(err), zap.Int("id", id))
render.BadRequest(w, r, msg)
default:
l.Error("could not read pet", zap.Error(err), zap.Int("id", id))
render.InternalServerError(w, r, nil)
}
return
}
l.Info("pet rendered", zap.Int("id", id))
easyjson.MarshalToHTTPResponseWriter(NewPet2657988899View(e), w)
}

Next, let’s see how to create a RESTful HTTP server for managing the Pet entity. Create a new Go file named main.go and add the following content:

package main
import (
"context"
"fmt"
"log"
"net/http"
"elk-example/ent"
elk "elk-example/ent/http"
"github.com/go-chi/chi/v5"
_ "github.com/mattn/go-sqlite3"
"go.uber.org/zap"
)
func main() {
// Create the ent client.
c, err := ent.Open("sqlite3", "./ent.db?_fk=1")
if err != nil {
log.Fatalf("failed opening connection to sqlite: %v", err)
}
defer c.Close()
// Run the auto migration tool.
if err := c.Schema.Create(context.Background()); err != nil {
log.Fatalf("failed creating schema resources: %v", err)
}
// Router and Logger.
r, l := chi.NewRouter(), zap.NewExample()
// Create the pet handler.
r.Route("/pets", func(r chi.Router) {
elk.NewPetHandler(c, l).Mount(r, elk.PetRoutes)
})
// Start listening for incoming requests.
fmt.Println("Server running")
defer fmt.Println("Server stopped")
if err := http.ListenAndServe(":8080", r); err != nil {
log.Fatal(err)
}
}

Next, start the server:

go run -mod=mod main.go

Congratulations! We now have a running server serving the Pets API. We could request the list of all pets in the database, but there are none yet. Let’s create one first:

curl -X 'POST' -H 'Content-Type: application/json' -d '{"name":"Kuro","age":3}' 'localhost:8080/pets'

You should get a response like this:

{
"age": 3,
"id": 1,
"name": "Kuro"
}

If you switch to the terminal where the server is running, you can see that elk comes with built-in logging:

{
"level": "info",
"msg": "pet rendered",
"handler": "PetHandler",
"method": "Create",
"id": 1
}

elk uses zap for logging. To learn more about it, have a look at the zap documentation.
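Before moving on: now that one pet is stored, the list endpoint is no longer empty. A quick check (the exact payload shape is an assumption; expect a JSON array containing Kuro):

curl 'localhost:8080/pets'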

Relations#

To demonstrate more of elk’s features, let’s extend our graph. Edit ent/schema/user.go and ent/schema/pet.go:

ent/schema/pet.go
// Edges of the Pet.
func (Pet) Edges() []ent.Edge {
return []ent.Edge{
edge.From("owner", User.Type).
Ref("pets").
Unique(),
}
}
ent/schema/user.go
package schema
import (
"entgo.io/ent"
"entgo.io/ent/schema/edge"
"entgo.io/ent/schema/field"
)
// User holds the schema definition for the User entity.
type User struct {
ent.Schema
}
// Fields of the User.
func (User) Fields() []ent.Field {
return []ent.Field{
field.String("name"),
field.Int("age"),
}
}
// Edges of the User.
func (User) Edges() []ent.Edge {
return []ent.Edge{
edge.To("pets", Pet.Type),
}
}

We now have a one-to-many relation between the Pet and User schemas: a Pet belongs to a User, and a User can have multiple Pets.

Re-run the code generator:

go generate ./...
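In addition to the HTTP routes, the regenerated Go client now exposes the new edge directly. A short sketch (for this schema, Ent generates an AddPets setter on the User builder and a SetOwner setter on the Pet builder):

// Create a pet and its owner using the generated edge setters.
kuro := c.Pet.Create().SetName("Kuro").SetAge(3).SaveX(ctx)
c.User.Create().SetName("Elk").SetAge(30).AddPets(kuro).SaveX(ctx)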

Don’t forget to register the UserHandler on the router. Just add the following lines to main.go:

[...]
r.Route("/pets", func(r chi.Router) {
elk.NewPetHandler(c, l).Mount(r, elk.PetRoutes)
})
+ // Create the user handler.
+ r.Route("/users", func(r chi.Router) {
+ elk.NewUserHandler(c, l).Mount(r, elk.UserRoutes)
+ })
// Start listening for incoming requests.
fmt.Println("Server running")
[...]

After restarting the server, we can create a User that owns the previously created pet named Kuro:

curl -X 'POST' -H 'Content-Type: application/json' -d '{"name":"Elk","age":30,"owner":1}' 'localhost:8080/users'

The server returns the following response:

{
"age": 30,
"edges": {},
"id": 1,
"name": "Elk"
}

We can see from the output that the user was created, but the edges are empty. elk does not include edges in its output by default. You can configure elk to render edges using a feature called “serialization groups”: annotate the schema with the elk.SchemaAnnotation and elk.Annotation structs. Edit ent/schema/user.go and add them:

// Edges of the User.
func (User) Edges() []ent.Edge {
return []ent.Edge{
edge.To("pets", Pet.Type).
Annotations(elk.Groups("user")),
}
}
// Annotations of the User.
func (User) Annotations() []schema.Annotation {
return []schema.Annotation{elk.ReadGroups("user")}
}

The elk.Annotation added to fields and edges tells elk to eager-load them and add them to the payload when the “user” group is requested. The elk.SchemaAnnotation is used to make read operations on the UserHandler request the “user” group. Note that fields with no serialization groups attached are included by default; edges, however, are excluded unless configured otherwise.

Next, let’s re-generate the code and restart the server. You should now see the user’s pets rendered:

curl 'localhost:8080/users/1'
{
"age": 30,
"edges": {
"pets": [
{
"id": 1,
"name": "Kuro",
"age": 3,
"edges": {}
}
]
},
"id": 1,
"name": "Elk"
}

Request Validation#

Our current schema allows setting a negative age for pets and users, and allows creating a pet without an owner (as we did with Kuro). Ent has built-in support for basic validation, but in some cases you may want to validate requests made against your API before the payload ever reaches Ent. elk uses the validator package to define validation rules and validate data. Validation rules can be configured separately for create and update operations using elk.Annotation. In our example, suppose we want the Pet schema to allow only ages greater than zero and to reject the creation of pets without an owner. Edit ent/schema/pet.go:

// Fields of the Pet.
func (Pet) Fields() []ent.Field {
return []ent.Field{
field.String("name"),
field.Int("age").
Positive().
Annotations(
elk.CreateValidation("required,gt=0"),
elk.UpdateValidation("gt=0"),
),
}
}
// Edges of the Pet.
func (Pet) Edges() []ent.Edge {
return []ent.Edge{
edge.From("owner", User.Type).
Ref("pets").
Unique().
Required().
Annotations(elk.Validation("required")),
}
}

Next, re-generate the code and restart the server. To test the new validation rules, let’s try to create a pet with a negative age and no owner:

curl -X 'POST' -H 'Content-Type: application/json' -d '{"name":"Bob","age":-2}' 'localhost:8080/pets'

elk returns a detailed response with information about which validations failed:

{
"code": 400,
"status": "Bad Request",
"errors": {
"Age": "This value failed validation on 'gt:0'.",
"Owner": "This value is required."
}
}

Note the uppercase field names. The validator package uses the struct field names to generate its validation errors, but you can easily override them, as shown in its documentation.

If you do not define any validation rules, elk does not include validation code in what it generates. elk’s request validation is especially handy when you want to validate across fields, as sketched below.
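To illustrate, here is a hypothetical sketch of a cross-field rule: assuming a schema with password and password_confirm fields, the eqfield tag from the validator package ties the two together:

// Hypothetical field: require password_confirm to match the Password field.
field.String("password_confirm").
Annotations(elk.CreateValidation("required,eqfield=Password")),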

Upcoming Features#

elk already has several useful features, but there is much more exciting work ahead. The next versions of elk will include:

  • A fully-working flutter frontend to manage your nodes
  • Integration of Ent’s validation into the current request validators
  • More transport formats (currently only JSON)

Wrapping Up#

In this post we covered only a small fraction of what elk can do. To see more examples, have a look at the project’s README on GitHub. I hope that elk-powered Ent helps you and your fellow developers automate the repetitive work of building RESTful APIs, so you can focus on more meaningful work.

elk is at an early stage of development; suggestions and feedback are very welcome, and we would be happy to have you contribute. Use the GitHub Issues for help, feedback, suggestions, and contributions.

About the Author#

MasseElch is a software engineer from the windy and flat north of Germany. When he is not hiking with his dog Kuro (who has his own Instagram channel 😱) or playing hide-and-seek with his son, he enjoys coding over a cup of coffee.