
· 6 min read

The OpenAPI Specification (OAS, formerly known as Swagger Specification) is a technical specification defining a standard, language-agnostic interface description for REST APIs. This allows both humans and automated tools to understand the described service without the actual source code or additional documentation. Combined with the Swagger Tooling you can generate both server and client boilerplate code for more than 20 languages, just by passing in the OAS document.

In a previous blog post, we presented a new feature of the Ent extension elk: a fully compliant OpenAPI Specification document generator.

Today, we are very happy to announce that the specification generator is now an official extension to the Ent project and has been moved to the ent/contrib repository. In addition, we have listened to the community's feedback and made some changes to the generator that we hope you will like.

Getting Started

To use the entoas extension, use the entc (ent codegen) package as described here. First, install the extension in your Go module:

go get entgo.io/contrib/entoas

Now follow the next two steps to enable it and to configure Ent to work with the entoas extension:

1. Create a new Go file named ent/entc.go and paste the following content:

// +build ignore

package main

import (
    "log"

    "entgo.io/contrib/entoas"
    "entgo.io/ent/entc"
    "entgo.io/ent/entc/gen"
)

func main() {
    ex, err := entoas.NewExtension()
    if err != nil {
        log.Fatalf("creating entoas extension: %v", err)
    }
    err = entc.Generate("./schema", &gen.Config{}, entc.Extensions(ex))
    if err != nil {
        log.Fatalf("running ent codegen: %v", err)
    }
}

2. Edit the ent/generate.go file to execute the ent/entc.go file:

package ent

//go:generate go run -mod=mod entc.go

With these steps complete, all is set up for generating an OAS document from your schema! If you are new to Ent and want to learn more about it, how to connect to different types of databases, run migrations or work with entities, then head over to the Setup Tutorial.

Generate an OAS document

The first step on our way to the OAS document is to create an Ent schema graph. For the sake of brevity, here is an example schema to use:

ent/schema/schema.go
package schema

import (
    "entgo.io/ent"
    "entgo.io/ent/schema/edge"
    "entgo.io/ent/schema/field"
)

// Fridge holds the schema definition for the Fridge entity.
type Fridge struct {
    ent.Schema
}

// Fields of the Fridge.
func (Fridge) Fields() []ent.Field {
    return []ent.Field{
        field.String("title"),
    }
}

// Edges of the Fridge.
func (Fridge) Edges() []ent.Edge {
    return []ent.Edge{
        edge.To("compartments", Compartment.Type),
    }
}

// Compartment holds the schema definition for the Compartment entity.
type Compartment struct {
    ent.Schema
}

// Fields of the Compartment.
func (Compartment) Fields() []ent.Field {
    return []ent.Field{
        field.String("name"),
    }
}

// Edges of the Compartment.
func (Compartment) Edges() []ent.Edge {
    return []ent.Edge{
        edge.From("fridge", Fridge.Type).
            Ref("compartments").
            Unique(),
        edge.To("contents", Item.Type),
    }
}

// Item holds the schema definition for the Item entity.
type Item struct {
    ent.Schema
}

// Fields of the Item.
func (Item) Fields() []ent.Field {
    return []ent.Field{
        field.String("name"),
    }
}

// Edges of the Item.
func (Item) Edges() []ent.Edge {
    return []ent.Edge{
        edge.From("compartment", Compartment.Type).
            Ref("contents").
            Unique(),
    }
}

The code above is the Ent way to describe a schema graph. In this particular case, we created three entities: Fridge, Compartment and Item. Additionally, we added some edges to the graph: a Fridge can have many Compartments, and a Compartment can contain many Items.

Now run the code generator:

go generate ./...

In addition to the files Ent normally generates, another file named ent/openapi.json has been created. Here is a sneak peek into the file:

ent/openapi.json
{
  "info": {
    "title": "Ent Schema API",
    "description": "This is an auto generated API description made out of an Ent schema definition",
    "termsOfService": "",
    "contact": {},
    "license": {
      "name": ""
    },
    "version": "0.0.0"
  },
  "paths": {
    "/compartments": {
      "get": {
[...]

If you feel like it, copy its contents and paste them into the Swagger Editor. It should look like this:

Swagger Editor

Basic Configuration

The description of our API does not yet reflect what it does, but entoas lets you change that! Open up ent/entc.go and pass in the updated title and description of our Fridge API:

ent/entc.go
//go:build ignore
// +build ignore

package main

import (
    "log"

    "entgo.io/contrib/entoas"
    "entgo.io/ent/entc"
    "entgo.io/ent/entc/gen"
)

func main() {
    ex, err := entoas.NewExtension(
        entoas.SpecTitle("Fridge CMS"),
        entoas.SpecDescription("API to manage fridges and their cooled contents. **ICY!**"),
        entoas.SpecVersion("0.0.1"),
    )
    if err != nil {
        log.Fatalf("creating entoas extension: %v", err)
    }
    err = entc.Generate("./schema", &gen.Config{}, entc.Extensions(ex))
    if err != nil {
        log.Fatalf("running ent codegen: %v", err)
    }
}

Rerunning the code generator will create an updated OAS document.

ent/openapi.json
{
  "info": {
    "title": "Fridge CMS",
    "description": "API to manage fridges and their cooled contents. **ICY!**",
    "termsOfService": "",
    "contact": {},
    "license": {
      "name": ""
    },
    "version": "0.0.1"
  },
  "paths": {
    "/compartments": {
      "get": {
[...]

Operation configuration

There are times when you do not want to generate endpoints for every operation on every node. Fortunately, entoas lets us configure which endpoints to generate and which to ignore. entoas's default policy is to expose all routes. You can either change this behaviour to expose no routes but those explicitly asked for, or you can tell entoas to exclude a specific operation by using an entoas.Annotation. Policies are also used to enable/disable the generation of sub-resource operations:

ent/schema/fridge.go
// Edges of the Fridge.
func (Fridge) Edges() []ent.Edge {
    return []ent.Edge{
        edge.To("compartments", Compartment.Type).
            // Do not generate an endpoint for POST /fridges/{id}/compartments
            Annotation(
                entoas.CreateOperation(
                    entoas.OperationPolicy(entoas.PolicyExclude),
                ),
            ),
    }
}

// Annotations of the Fridge.
func (Fridge) Annotations() []schema.Annotation {
    return []schema.Annotation{
        // Do not generate an endpoint for DELETE /fridges/{id}
        entoas.DeleteOperation(entoas.OperationPolicy(entoas.PolicyExclude)),
    }
}

And voilà! The operations are gone.

For more information about how entoas's policies work and what you can do with them, have a look at the godoc.
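If you prefer the opposite default, where no operation is generated unless explicitly requested, the extension can be configured accordingly. Here is a minimal sketch, assuming the entoas.DefaultPolicy option described in the godoc:

ex, err := entoas.NewExtension(
    // Generate no operations except those explicitly
    // marked with entoas.PolicyExpose.
    entoas.DefaultPolicy(entoas.PolicyExclude),
)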

Simple Models

By default, entoas generates one response schema per endpoint. To learn about the naming strategy, have a look at the godoc.

One Schema per Endpoint

Many users have requested changing this behaviour to simply map the Ent schema to the OAS document. Therefore, you can now configure entoas to do just that:

ex, err := entoas.NewExtension(
    entoas.SpecTitle("Fridge CMS"),
    entoas.SpecDescription("API to manage fridges and their cooled contents. **ICY!**"),
    entoas.SpecVersion("0.0.1"),
    entoas.SimpleModels(),
)

Simple Schemas

Wrapping Up

In this post we announced entoas, the official integration of the former elk OpenAPI Specification generator into Ent. This feature connects Ent's code-generation capabilities with OpenAPI/Swagger's rich tooling ecosystem.

Have questions? Need help with getting started? Feel free to join our Slack channel.


· 8 min read

One of the common questions we get from the Ent community is how to synchronize objects or references between the database backing an Ent application (e.g. MySQL or PostgreSQL) with external services. For example, users would like to create or delete a record from within their CRM when a user is created or deleted in Ent, publish a message to a Pub/Sub system when an entity is updated, or verify references to blobs in object storage such as AWS S3 or Google Cloud Storage.

Ensuring consistency between two separate data systems is not a simple task. When we want to propagate, for example, the deletion of a record in one system to another, there is no obvious way to guarantee that the two systems will end up in a synchronized state, since one of them may fail, and the network link between them may be slow or down. Having said that, and especially with the prominence of microservices architectures, these problems have become more common, and distributed systems researchers have come up with patterns to solve them, such as the Saga Pattern.

The application of these patterns is usually complex and difficult, and so in many cases architects do not go after a "perfect" design, and instead go after simpler solutions that involve either the acceptance of some inconsistency between the systems or background reconciliation procedures.

In this post, we will not discuss how to solve distributed transactions or implement the Saga pattern with Ent. Instead, we will limit our scope to study how to hook into Ent mutations before and after they occur, and run our custom logic there.

Propagating Mutations to External Systems

In our example, we are going to create a simple User schema with two immutable string fields, "name" and "avatar_url". Let's run the ent init command to create a skeleton schema for our User:

go run entgo.io/ent/cmd/ent init User

Then, add the name and the avatar_url fields and run go generate to generate the assets.

ent/schema/user.go
type User struct {
    ent.Schema
}

func (User) Fields() []ent.Field {
    return []ent.Field{
        field.String("name").
            Immutable(),
        field.String("avatar_url").
            Immutable(),
    }
}

go generate ./ent

The Problem

The avatar_url field defines a URL to an image in a bucket on our object storage (e.g. AWS S3). For the purpose of this discussion we want to make sure that:

  • When a user is created, an image with the URL stored in "avatar_url" exists in our bucket.
  • Orphan images are deleted from the bucket. This means that when a user is deleted from our system, its avatar image is deleted as well.

For interacting with blobs, we will use the gocloud.dev/blob package. This package provides an abstraction for reading, writing, deleting and listing blobs in a bucket. Similar to the database/sql package, it allows interacting with a variety of object stores through the same API by configuring a driver URL. For example:

// Open an in-memory bucket.
bucket, err := blob.OpenBucket(ctx, "mem://photos/")
if err != nil {
    log.Fatal("failed opening in-memory bucket:", err)
}

// Open an S3 bucket named photos.
bucket, err = blob.OpenBucket(ctx, "s3://photos")
if err != nil {
    log.Fatal("failed opening s3 bucket:", err)
}

// Open a bucket named my-bucket in Google Cloud Storage.
bucket, err = blob.OpenBucket(ctx, "gs://my-bucket")
if err != nil {
    log.Fatal("failed opening gs bucket:", err)
}
defer bucket.Close()

Schema Hooks

Hooks are a powerful feature of Ent that allows adding custom logic before and after operations that mutate the graph.

Hooks can be either defined dynamically using client.Use (called "Runtime Hooks"), or explicitly on the schema (called "Schema Hooks") as follows:

// Hooks of the User.
func (User) Hooks() []ent.Hook {
    return []ent.Hook{
        EnsureImageExists(),
        DeleteOrphans(),
    }
}
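For comparison, a runtime hook registered dynamically with client.Use could look like this minimal sketch; it merely logs every mutation before delegating to the next mutator (the log format is illustrative):

client.Use(func(next ent.Mutator) ent.Mutator {
    return ent.MutateFunc(func(ctx context.Context, m ent.Mutation) (ent.Value, error) {
        // Log the operation and entity type before running the mutation.
        log.Printf("mutation: op=%v type=%s", m.Op(), m.Type())
        return next.Mutate(ctx, m)
    })
})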

As you can imagine, the EnsureImageExists hook will be responsible for ensuring that when a user is created, their avatar URL exists in the bucket, and the DeleteOrphans hook will ensure that orphan images are deleted. Let's start writing them.

ent/schema/hooks.go
func EnsureImageExists() ent.Hook {
    hk := func(next ent.Mutator) ent.Mutator {
        return hook.UserFunc(func(ctx context.Context, m *ent.UserMutation) (ent.Value, error) {
            avatarURL, exists := m.AvatarURL()
            if !exists {
                return nil, errors.New("avatar field is missing")
            }
            // TODO:
            // 1. Verify that "avatarURL" points to a real object in the bucket.
            // 2. Otherwise, fail.
            return next.Mutate(ctx, m)
        })
    }
    // Limit the hook only to "Create" operations.
    return hook.On(hk, ent.OpCreate)
}

func DeleteOrphans() ent.Hook {
    hk := func(next ent.Mutator) ent.Mutator {
        return hook.UserFunc(func(ctx context.Context, m *ent.UserMutation) (ent.Value, error) {
            id, exists := m.ID()
            if !exists {
                return nil, errors.New("id field is missing")
            }
            // TODO:
            // 1. Get the AvatarURL field of the deleted user.
            // 2. Cascade the deletion to object storage.
            return next.Mutate(ctx, m)
        })
    }
    // Limit the hook only to "DeleteOne" operations.
    return hook.On(hk, ent.OpDeleteOne)
}

Now, you may ask yourself, how do we access the blob client from the mutation hooks? You will find out in the next section.

Injecting Dependencies

The entc.Dependency option allows extending the generated builders with external dependencies as struct fields, and provides options for injecting them on client initialization.

To inject a blob.Bucket to be available inside our hooks, we can follow the tutorial about external dependencies in the website, and define the gocloud.dev/blob.Bucket as a dependency.

ent/entc.go
func main() {
    opts := []entc.Option{
        entc.Dependency(
            entc.DependencyName("Bucket"),
            entc.DependencyType(&blob.Bucket{}),
        ),
    }
    if err := entc.Generate("./schema", &gen.Config{}, opts...); err != nil {
        log.Fatalf("running ent codegen: %v", err)
    }
}

Next, re-run code generation:

go generate ./ent

We can now access the Bucket API from all generated builders. Let's finish the implementations of the above hooks.

ent/schema/hooks.go
// EnsureImageExists ensures the avatar_url points
// to a real object in the bucket.
func EnsureImageExists() ent.Hook {
    hk := func(next ent.Mutator) ent.Mutator {
        return hook.UserFunc(func(ctx context.Context, m *ent.UserMutation) (ent.Value, error) {
            avatarURL, exists := m.AvatarURL()
            if !exists {
                return nil, errors.New("avatar field is missing")
            }
            switch exists, err := m.Bucket.Exists(ctx, avatarURL); {
            case err != nil:
                return nil, fmt.Errorf("check key existence: %w", err)
            case !exists:
                return nil, fmt.Errorf("key %q does not exist in the bucket", avatarURL)
            default:
                return next.Mutate(ctx, m)
            }
        })
    }
    return hook.On(hk, ent.OpCreate)
}

// DeleteOrphans cascades the user deletion to the bucket.
// Hence, when a user is deleted, its avatar image is deleted
// as well.
func DeleteOrphans() ent.Hook {
    hk := func(next ent.Mutator) ent.Mutator {
        return hook.UserFunc(func(ctx context.Context, m *ent.UserMutation) (ent.Value, error) {
            id, exists := m.ID()
            if !exists {
                return nil, errors.New("id field is missing")
            }
            u, err := m.Client().User.Get(ctx, id)
            if err != nil {
                return nil, fmt.Errorf("getting deleted user: %w", err)
            }
            if err := m.Bucket.Delete(ctx, u.AvatarURL); err != nil {
                return nil, fmt.Errorf("deleting user avatar from bucket: %w", err)
            }
            return next.Mutate(ctx, m)
        })
    }
    return hook.On(hk, ent.OpDeleteOne)
}

Now, it's time to test our hooks! Let's write a testable example that verifies that our two hooks work as expected.

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/a8m/ent-sync-example/ent"
    _ "github.com/a8m/ent-sync-example/ent/runtime"

    "entgo.io/ent/dialect"
    _ "github.com/mattn/go-sqlite3"
    "gocloud.dev/blob"
    _ "gocloud.dev/blob/memblob"
)

func Example_SyncCreate() {
    ctx := context.Background()
    // Open an in-memory bucket.
    bucket, err := blob.OpenBucket(ctx, "mem://photos/")
    if err != nil {
        log.Fatal("failed opening bucket:", err)
    }
    client, err := ent.Open(
        dialect.SQLite,
        "file:ent?mode=memory&cache=shared&_fk=1",
        // Inject the blob.Bucket on client initialization.
        ent.Bucket(bucket),
    )
    if err != nil {
        log.Fatal("failed opening connection to sqlite:", err)
    }
    defer client.Close()
    if err := client.Schema.Create(ctx); err != nil {
        log.Fatal("failed creating schema resources:", err)
    }
    if err := client.User.Create().SetName("a8m").SetAvatarURL("a8m.png").Exec(ctx); err == nil {
        log.Fatal("expect user creation to fail because the image does not exist in the bucket")
    }
    if err := bucket.WriteAll(ctx, "a8m.png", []byte{255, 255, 255}, nil); err != nil {
        log.Fatalf("failed uploading image to the bucket: %v", err)
    }
    fmt.Printf("%q\n", keys(ctx, bucket))

    // User creation should pass as the image was uploaded to the bucket.
    u := client.User.Create().SetName("a8m").SetAvatarURL("a8m.png").SaveX(ctx)

    // Deleting a user should also delete its image from the bucket.
    client.User.DeleteOne(u).ExecX(ctx)
    fmt.Printf("%q\n", keys(ctx, bucket))

    // Output:
    // ["a8m.png"]
    // []
}
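The example above calls a small keys helper that lists the keys currently stored in the bucket. A minimal sketch of it, using the blob iterator API (the full version lives in the example repository; note the extra io import):

// keys returns the keys of all objects currently in the bucket.
func keys(ctx context.Context, b *blob.Bucket) []string {
    var ks []string
    it := b.List(nil)
    for {
        obj, err := it.Next(ctx)
        if err == io.EOF {
            break
        }
        if err != nil {
            log.Fatalf("listing bucket keys: %v", err)
        }
        ks = append(ks, obj.Key)
    }
    return ks
}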

Wrapping Up

Great! We have configured Ent to extend our generated code and inject the blob.Bucket as an External Dependency. Next, we defined two mutation hooks and used the blob.Bucket API to ensure our product constraints are satisfied.

The code for this example is available at github.com/a8m/ent-sync-example.


· 4 min read

Ent is a powerful Entity framework that helps developers write neat code that is translated into (possibly complex) database queries. As the usage of your application grows, it doesn’t take long until you stumble upon performance issues with your database. Troubleshooting database performance issues is notoriously hard, especially when you’re not equipped with the right tools.

The following example shows how Ent query code is translated into an SQL query.

Example 1 - ent code is translated to SQL query

Traditionally, it has been very difficult to correlate poorly performing database queries with the application code that is generating them. Database performance analysis tools can help point out slow queries by analyzing database server logs, but how can they be traced back to the application?

Sqlcommenter

Earlier this year, Google introduced Sqlcommenter, "an open source library that addresses the gap between the ORM libraries and understanding database performance. Sqlcommenter gives application developers visibility into which application code is generating slow queries and maps application traces to database query plans".

In other words, Sqlcommenter adds application context metadata to SQL queries. This information can then be used to provide meaningful insights. It does so by adding SQL comments to the query that carry metadata but are ignored by the database during query execution. For example, the following query contains a comment that carries metadata about the application that issued it (users-mgr), which controller and route triggered it (users and user_rename, respectively), and the database driver that was used (ent:v0.9.1):

update users set username = 'hedwigz' where id = 88
/*application='users-mgr',controller='users',route='user_rename',db_driver='ent:v0.9.1'*/

To get a taste of how the analysis of metadata collected with Sqlcommenter can help us better understand performance issues in our application, consider the following example: Google Cloud recently launched Cloud SQL Insights, a cloud-based SQL performance analysis product. In the image below, we see a screenshot from the Cloud SQL Insights Dashboard that shows that the HTTP route 'api/users' is causing many locks on the database. We can also see that this query got called 16,067 times in the last 6 hours.

Screenshot from Cloud SQL Insights Dashboard

This is the power of SQL tags - they let you correlate your application-level information with your database monitoring.

sqlcomment

sqlcomment is an Ent driver that adds metadata to SQL queries using comments following the sqlcommenter specification. By wrapping an existing Ent driver with sqlcomment, users can leverage any tool that supports the standard to triage query performance issues. Without further ado, let’s see sqlcomment in action.

First, to install sqlcomment run:

go get ariga.io/sqlcomment

sqlcomment wraps an underlying SQL driver; therefore, we need to open our SQL connection using Ent's sql module instead of the popular helper ent.Open.

Info: Make sure to import entgo.io/ent/dialect/sql in the following snippet.

// Create db driver.
db, err := sql.Open("sqlite3", "file:ent?mode=memory&cache=shared&_fk=1")
if err != nil {
    log.Fatalf("Failed to connect to database: %v", err)
}

// Create sqlcomment driver which wraps the sqlite driver.
drv := sqlcomment.NewDriver(db,
    sqlcomment.WithDriverVerTag(),
    sqlcomment.WithTags(sqlcomment.Tags{
        sqlcomment.KeyApplication: "my-app",
        sqlcomment.KeyFramework:   "net/http",
    }),
)

// Create and configure ent client.
client := ent.NewClient(ent.Driver(drv))

Now, whenever we execute a query, sqlcomment will suffix our SQL query with the tags we set up. If we were to run the following query:

client.User.
    Update().
    Where(
        user.Or(
            user.AgeGT(30),
            user.Name("bar"),
        ),
        user.HasFollowers(),
    ).
    SetName("foo").
    Save(ctx)

Ent would output the following commented SQL query:

UPDATE `users`
SET `name` = ?
WHERE (
    `users`.`age` > ?
    OR `users`.`name` = ?
)
AND `users`.`id` IN (
    SELECT `user_following`.`follower_id`
    FROM `user_following`
)
/*application='my-app',db_driver='ent:v0.9.1',framework='net%2Fhttp'*/

As you can see, Ent output an SQL query with a comment at the end, containing all the relevant information associated with that query.

sqlcomment supports more tags and has integrations with OpenTelemetry and OpenCensus. To see more examples and scenarios, please visit the GitHub repo.
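For instance, a service that knows which route and controller triggered a query could tag those as well. The sketch below assumes the package exports KeyRoute and KeyController constants mirroring the sqlcommenter spec's tag names; check the repo for the exact API:

drv := sqlcomment.NewDriver(db,
    sqlcomment.WithDriverVerTag(),
    sqlcomment.WithTags(sqlcomment.Tags{
        sqlcomment.KeyApplication: "my-app",
        // Assumed constants mirroring the sqlcommenter spec tags.
        sqlcomment.KeyRoute:      "/users/:id",
        sqlcomment.KeyController: "users",
    }),
)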

Wrapping Up

In this post I showed how adding metadata to queries using SQL comments can help correlate between source code and database queries. Next, I introduced sqlcomment - an Ent driver that adds SQL tags to all of your queries. Finally, we saw sqlcomment in action by installing and configuring it with Ent. If you like the code and/or want to contribute, feel free to check out the project on GitHub.

Have questions? Need help with getting started? Feel free to join our Slack channel.


· 7 min read

While working on Ariga's operational data graph query engine, we saw the opportunity to greatly improve the performance of many use cases by building a robust caching library. As heavy users of Ent, it was only natural for us to implement this layer as an extension to Ent. In this post, I will briefly explain what caches are, how they fit into software architectures, and present entcache - a cache driver for Ent.

Caching is a popular strategy for improving application performance. It is based on the observation that the speed for retrieving data using different types of media can vary within many orders of magnitude. Jeff Dean famously presented the following numbers in a lecture about "Software Engineering Advice from Building Large-Scale Distributed Systems":

cache numbers

These numbers show things that experienced software engineers know intuitively: reading from memory is faster than reading from disk, and retrieving data from the same data center is faster than going out to the internet to fetch it. Add to that the fact that some computations are expensive and slow, and that fetching a precomputed result can be much faster (and cheaper) than recomputing it every time.

The collective intelligence of Wikipedia tells us that a Cache is "a hardware or software component that stores data so that future requests for that data can be served faster". In other words, if we can store a query result in RAM, we can fulfill a request that depends on it much faster than if we need to go over the network to our database, have it read data from disk, run some computation on it, and only then send it back to us (over a network).

However, as software engineers, we should remember that caching is a notoriously complicated topic. As the phrase coined by early-day Netscape engineer Phil Karlton says: "There are only two hard things in Computer Science: cache invalidation and naming things". For instance, in systems that rely on strong consistency, a cache entry may be stale, therefore causing the system to behave incorrectly. For this reason, take great care and pay attention to detail when you are designing caches into your system architectures.

Presenting entcache

The entcache package provides its users with a new Ent driver that can wrap one of the existing SQL drivers available for Ent. On a high level, it decorates the Query method of the given driver, and for each call:

  1. Generates a cache key (i.e. hash) from its arguments (i.e. statement and parameters).

  2. Checks the cache to see if the results for this query are already available. If they are (this is called a cache-hit), the database is skipped and results are returned to the caller from memory.

  3. If the cache does not contain an entry for the query, the query is passed to the database.

  4. After the query is executed, the driver records the raw values of the returned rows (sql.Rows), and stores them in the cache with the generated cache key.

The package provides a variety of options to configure the TTL of the cache entries, control the hash function, provide custom and multi-level cache stores, and evict or skip cache entries. See the full documentation at https://pkg.go.dev/ariga.io/entcache.
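As a quick taste, here is a minimal sketch of wrapping a driver with a TTL, based on the options mentioned above (the one-minute value is arbitrary):

// Wrap the underlying driver and expire cached rows after one minute.
drv := entcache.NewDriver(
    db,
    entcache.TTL(time.Minute),
)
client := ent.NewClient(ent.Driver(drv))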

As we mentioned above, correctly configuring caching for an application is a delicate task, and so entcache provides developers with different caching levels that can be used with it:

  1. A context.Context-based cache. Usually, attached to a request and does not work with other cache levels. It is used to eliminate duplicate queries that are executed by the same request.

  2. A driver-level cache used by the ent.Client. An application usually creates a driver per database, and therefore, we treat it as a process-level cache.

  3. A remote cache. For example, a Redis database that provides a persistence layer for storing and sharing cache entries between multiple processes. A remote cache layer is resistant to application deployment changes or failures, and allows reducing the number of identical queries executed on the database by different processes.

  4. A cache hierarchy, or multi-level cache, allows structuring the cache in a hierarchical way. The hierarchy of cache stores is mostly based on access speeds and cache sizes. For example, a 2-level cache composed of an LRU cache in the application memory and a remote-level cache backed by a Redis database, as sketched below.
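Here is a hedged sketch of such a 2-level setup, assuming the entcache.Levels, entcache.NewLRU and entcache.NewRedis helpers from the package documentation, with a go-redis client:

rdb := redis.NewClient(&redis.Options{
    Addr: ":6379",
})
drv := entcache.NewDriver(
    db,
    entcache.TTL(time.Minute),
    // Look up entries first in an in-process LRU cache of 256 entries,
    // then fall back to a shared cache backed by Redis.
    entcache.Levels(
        entcache.NewLRU(256),
        entcache.NewRedis(rdb),
    ),
)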

Let's demonstrate this by explaining the context.Context-based cache.

Context-Level Cache

The ContextLevel option configures the driver to work with a context.Context level cache. The context is usually attached to a request (e.g. *http.Request) and is not available in multi-level mode. When this option is used as a cache store, the attached context.Context carries an LRU cache (can be configured differently), and the driver stores and searches entries in the LRU cache when queries are executed.
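A minimal sketch of wiring this up, assuming the entcache.NewContext helper from the package docs attaches the request-scoped cache (the HTTP handler is illustrative):

drv := entcache.NewDriver(db, entcache.ContextLevel())
client := ent.NewClient(ent.Driver(drv))

http.HandleFunc("/users", func(w http.ResponseWriter, r *http.Request) {
    // Attach a fresh request-scoped cache to the context.
    ctx := entcache.NewContext(r.Context())
    // Identical queries executed with this ctx during the same
    // request are served from the request-level cache.
    users, err := client.User.Query().All(ctx)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    fmt.Fprintf(w, "%d users\n", len(users))
})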

This option is ideal for applications that require strong consistency, but still want to avoid executing duplicate database queries on the same request. For example, given the following GraphQL query:

query($ids: [ID!]!) {
  nodes(ids: $ids) {
    ... on User {
      id
      name
      todos {
        id
        owner {
          id
          name
        }
      }
    }
  }
}

A naive solution for resolving the above query would execute 1 query to get the N users, another N queries to get the todos of each user, and a query per todo item to get its owner (read more about the N+1 Problem).

However, Ent provides a unique approach for resolving such queries (read more on the Ent website), and therefore only 3 queries will be executed in this case: 1 for getting the N users, 1 for getting the todo items of all users, and 1 for getting the owners of all todo items.

With entcache, the number of queries may be reduced to 2, as the first and last queries are identical (see code example).

context-level-cache

The different levels are explained in depth in the repository README.

Getting Started

If you are not familiar with how to set up a new Ent project, complete the Ent Setting Up tutorial first.

First, go get the package using the following command.

go get ariga.io/entcache

After installing entcache, you can easily add it to your project with the snippet below:

// Open the database connection.
db, err := sql.Open(dialect.SQLite, "file:ent?mode=memory&cache=shared&_fk=1")
if err != nil {
    log.Fatal("opening database", err)
}
// Decorate the sql.Driver with entcache.Driver.
drv := entcache.NewDriver(db)
// Create an ent.Client.
client := ent.NewClient(ent.Driver(drv))

// Tell the entcache.Driver to skip the caching layer
// when running the schema migration.
if err := client.Schema.Create(entcache.Skip(ctx)); err != nil {
    log.Fatal("running schema migration", err)
}

// Run queries.
if _, err := client.User.Get(ctx, id); err != nil {
    log.Fatal("querying user", err)
}
// The query below is cached.
if _, err := client.User.Get(ctx, id); err != nil {
    log.Fatal("querying user", err)
}

To see more advanced examples, head over to the repo's examples directory.

Wrapping Up

In this post, I presented entcache, a new cache driver for Ent that I developed while working on Ariga's operational data graph query engine. We started the discussion by briefly mentioning the motivation for including caches in software systems. Following that, we described the features and capabilities of entcache, and concluded with a short example of how you can set it up in your application.

There are a few features we are working on, and wish to work on, but need help from the community to design them properly (solving cache invalidation, anyone? ;)). If you are interested in contributing, reach out to me on the Ent Slack channel.


· 9 min read

A few months ago the Ent project announced the Schema Import Initiative, whose goal is to help support many use cases for generating Ent schemas from external resources. Today, I'm happy to share a project I've been working on: entimport - an importent (pun intended) command line tool designed to create Ent schemas from existing SQL databases. This feature has been requested by the community for some time, so I hope many people find it useful. It can help ease the transition of an existing setup from another language or ORM to Ent. It can also help with use cases where you would like to access the same data from different platforms (such as to automatically sync between them).
The first version supports both MySQL and PostgreSQL databases, with some limitations described below. Support for other relational databases such as SQLite is in the works.

Getting Started

To give you an idea of how entimport works, I want to share a quick example of end-to-end usage with a MySQL database. On a high level, this is what we're going to do:

  1. Create a Database and Schema - we want to show how entimport can generate an Ent schema for an existing database. We will first create a database, then define some tables in it that we can import into Ent.
  2. Initialize an Ent Project - we will use the Ent CLI to create the needed directory structure and an Ent schema generation script.
  3. Install entimport
  4. Run entimport against our demo database - next, we will import the database schema that we’ve created into our Ent project.
  5. Explain how to use Ent with our generated schemas.

Let's get started.

Create a Database

We're going to start by creating a database. The way I prefer to do it is to use a Docker container. We will use a docker-compose file which automatically passes all needed parameters to the MySQL container.

Start the project in a new directory called entimport-example. Create a file named docker-compose.yaml and paste the following content inside:

version: "3.7"

services:

mysql8:
platform: linux/amd64
image: mysql
environment:
MYSQL_DATABASE: entimport
MYSQL_ROOT_PASSWORD: pass
healthcheck:
test: mysqladmin ping -ppass
ports:
- "3306:3306"

This file contains the service configuration for a MySQL docker container. Run it with the following command:

docker-compose up -d

Next, we will create a simple schema. For this example we will use a relation between two entities:

  • User
  • Car

Connect to the database using the MySQL shell. You can do it with the following command (make sure you run it from the root project directory):

docker-compose exec mysql8 mysql --database=entimport -ppass

Next, create the users and cars tables:
create table users
(
    id        bigint auto_increment primary key,
    age       bigint       not null,
    name      varchar(255) not null,
    last_name varchar(255) null comment 'surname'
);

create table cars
(
    id          bigint auto_increment primary key,
    model       varchar(255) not null,
    color       varchar(255) not null,
    engine_size mediumint    not null,
    user_id     bigint       null,
    constraint cars_owners foreign key (user_id) references users (id) on delete set null
);

Let's validate that we've created the tables mentioned above. In your MySQL shell, run:

show tables;
+---------------------+
| Tables_in_entimport |
+---------------------+
| cars                |
| users               |
+---------------------+

We should see two tables: users & cars.

Initialize Ent Project

Now that we've created our database and a baseline schema to demonstrate our example, we need to create a Go project with Ent. In this phase, I will explain how to do that. Since we would eventually like to use our imported schema, we need to create the Ent directory structure.

Initialize a new Go project inside a directory called entimport-example:

go mod init entimport-example

Run ent init:

go run -mod=mod entgo.io/ent/cmd/ent init 

The project should look like this:

β”œβ”€β”€ docker-compose.yaml
β”œβ”€β”€ ent
β”‚ β”œβ”€β”€ generate.go
β”‚ └── schema
└── go.mod

Install entimport

OK, now the fun begins! We are finally ready to install entimport and see it in action.
Let’s start by running entimport:

go run -mod=mod ariga.io/entimport/cmd/entimport -h

entimport will be downloaded and the command will print:

Usage of entimport:
  -dialect string
        database dialect (default "mysql")
  -dsn string
        data source name (connection information)
  -schema-path string
        output path for ent schema (default "./ent/schema")
  -tables value
        comma-separated list of tables to inspect (all if empty)

Run entimport

We are now ready to import our MySQL schema to Ent!

We will do it with the following command:

This command will import all tables in our schema; you can also limit it to specific tables using the -tables flag.

go run ariga.io/entimport/cmd/entimport -dialect mysql -dsn "root:pass@tcp(localhost:3306)/entimport"

Like many Unix tools, entimport doesn't print anything on a successful run. To verify that it ran properly, we will check the file system, and more specifically, the ent/schema directory.

├── docker-compose.yaml
├── ent
│   ├── generate.go
│   └── schema
│       ├── car.go
│       └── user.go
├── go.mod
└── go.sum

Let's see what this gives us - remember that we had two schemas: the users schema and the cars schema, with a one-to-many relationship between them. Let's see how entimport performed.

entimport-example/ent/schema/user.go
type User struct {
    ent.Schema
}

func (User) Fields() []ent.Field {
    return []ent.Field{field.Int("id"), field.Int("age"), field.String("name"), field.String("last_name").Optional().Comment("surname")}
}
func (User) Edges() []ent.Edge {
    return []ent.Edge{edge.To("cars", Car.Type)}
}
func (User) Annotations() []schema.Annotation {
    return nil
}
entimport-example/ent/schema/car.go
type Car struct {
    ent.Schema
}

func (Car) Fields() []ent.Field {
    return []ent.Field{field.Int("id"), field.String("model"), field.String("color"), field.Int32("engine_size"), field.Int("user_id").Optional()}
}
func (Car) Edges() []ent.Edge {
    return []ent.Edge{edge.From("user", User.Type).Ref("cars").Unique().Field("user_id")}
}
func (Car) Annotations() []schema.Annotation {
    return nil
}

entimport successfully created entities and their relation!

So far so good. Now let's actually try them out. First, we must run Ent's code generation, since Ent is a schema-first ORM that generates Go code for interacting with different databases.

To run the Ent code generation:

go generate ./ent

Let's see our ent directory:

...
├── ent
│   ├── car
│   │   ├── car.go
│   │   └── where.go
...
│   ├── schema
│   │   ├── car.go
│   │   └── user.go
...
│   ├── user
│   │   ├── user.go
│   │   └── where.go
...

Ent Example

Let's run a quick example to verify that our schema works. Create a file named example.go in the root of the project, with the following content (this part of the example can be found here):

entimport-example/example.go
package main

import (
    "context"
    "fmt"
    "log"

    "entimport-example/ent"

    "entgo.io/ent/dialect"
    _ "github.com/go-sql-driver/mysql"
)

func main() {
    client, err := ent.Open(dialect.MySQL, "root:pass@tcp(localhost:3306)/entimport?parseTime=True")
    if err != nil {
        log.Fatalf("failed opening connection to mysql: %v", err)
    }
    defer client.Close()
    ctx := context.Background()
    example(ctx, client)
}

Let's try to add a user. Write the following code at the end of the file:

entimport-example/example.go
func example(ctx context.Context, client *ent.Client) {
    // Create a User.
    zeev := client.User.
        Create().
        SetAge(33).
        SetName("Zeev").
        SetLastName("Manilovich").
        SaveX(ctx)
    fmt.Println("User created:", zeev)
}

Then run:

go run example.go

This should output:

# User created: User(id=1, age=33, name=Zeev, last_name=Manilovich)

Let's check the database to verify that the user was really added:

SELECT *
FROM users
WHERE name = 'Zeev';

+--+---+----+----------+
|id|age|name|last_name |
+--+---+----+----------+
|1 |33 |Zeev|Manilovich|
+--+---+----+----------+

Great! Now let's play a little more with Ent and add some relations. Add the following code at the end of the example() func (make sure you add "entimport-example/ent/user" to the import() declaration):

entimport-example/example.go
// Create Car.
vw := client.Car.
    Create().
    SetModel("volkswagen").
    SetColor("blue").
    SetEngineSize(1400).
    SaveX(ctx)
fmt.Println("First car created:", vw)

// Update the user - add the car relation.
client.User.Update().Where(user.ID(zeev.ID)).AddCars(vw).SaveX(ctx)

// Query all cars that belong to the user.
cars := zeev.QueryCars().AllX(ctx)
fmt.Println("User cars:", cars)

// Create a second Car.
delorean := client.Car.
    Create().
    SetModel("delorean").
    SetColor("silver").
    SetEngineSize(9999).
    SaveX(ctx)
fmt.Println("Second car created:", delorean)

// Update the user - add another car relation.
client.User.Update().Where(user.ID(zeev.ID)).AddCars(delorean).SaveX(ctx)

// Traverse the sub-graph.
cars = delorean.
    QueryUser().
    QueryCars().
    AllX(ctx)
fmt.Println("User cars:", cars)

This part of the example can be found here

Now run go run example.go again. After running the code above, the database should hold a user with two cars in an O2M relation.

SELECT *
FROM users;

+--+---+----+----------+
|id|age|name|last_name |
+--+---+----+----------+
|1 |33 |Zeev|Manilovich|
+--+---+----+----------+

SELECT *
FROM cars;

+--+----------+------+-----------+-------+
|id|model |color |engine_size|user_id|
+--+----------+------+-----------+-------+
|1 |volkswagen|blue |1400 |1 |
|2 |delorean |silver|9999 |1 |
+--+----------+------+-----------+-------+

Syncing DB changes

Since we want to keep the database in sync, we want entimport to be able to update the schema after the database has changed. Let's see how it works.

Run the following SQL code to add a phone column with a unique index to the users table:

alter table users
    add phone varchar(255) null;

create unique index users_phone_uindex
    on users (phone);

The table should look like this:

describe users;
+-----------+--------------+------+-----+---------+----------------+
| Field     | Type         | Null | Key | Default | Extra          |
+-----------+--------------+------+-----+---------+----------------+
| id        | bigint       | NO   | PRI | NULL    | auto_increment |
| age       | bigint       | NO   |     | NULL    |                |
| name      | varchar(255) | NO   |     | NULL    |                |
| last_name | varchar(255) | YES  |     | NULL    |                |
| phone     | varchar(255) | YES  | UNI | NULL    |                |
+-----------+--------------+------+-----+---------+----------------+

Now let's run entimport again to get the latest schema from our database:

go run -mod=mod ariga.io/entimport/cmd/entimport -dialect mysql -dsn "root:pass@tcp(localhost:3306)/entimport"

We can see that the user.go file was changed:

entimport-example/ent/schema/user.go
func (User) Fields() []ent.Field {
    return []ent.Field{field.Int("id"), ..., field.String("phone").Optional().Unique()}
}

Now we can run go generate ./ent again and use the new schema to add a phone to the User entity.
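For instance, after regenerating, the builders should expose a setter for the new field. A minimal sketch (the phone number is made up):

// Set the new phone field on the existing user.
zeev = client.User.
    UpdateOneID(zeev.ID).
    SetPhone("+972-55-000-0000").
    SaveX(ctx)
fmt.Println("User updated:", zeev)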

Future Plans

As mentioned above, this initial version supports MySQL and PostgreSQL databases. It also supports all types of SQL relations. I have plans to further upgrade the tool and add features such as missing PostgreSQL fields, default values, and more.

Wrapping Up

In this post, I presented entimport, a tool that was anticipated and requested many times by the Ent community, and showed an example of how to use it with Ent. This tool is another addition to Ent's schema import tools, which are designed to make the integration of Ent even easier. For discussion and support, open an issue. The full example can be found here. I hope you found this blog post useful!


· 8 min read

In a previous blog post, we presented elk - an extension to Ent enabling you to generate a fully-working Go CRUD HTTP API from your schema. In today's post I'd like to introduce a shiny new feature that recently made it into elk: a fully compliant OpenAPI Specification (OAS) generator.

OAS (formerly known as Swagger Specification) is a technical specification defining a standard, language-agnostic interface description for REST APIs. This allows both humans and automated tools to understand the described service without the actual source code or additional documentation. Combined with the Swagger Tooling you can generate both server and client boilerplate code for more than 20 languages, just by passing in the OAS file.

Getting Started

The first step is to add the elk package to your project:

go get github.com/masseelch/elk@latest

elk uses the Ent Extension API to integrate with Ent’s code-generation. This requires that we use the entc (ent codegen) package as described here to generate code for our project. Follow the next two steps to enable it and to configure Ent to work with the elk extension:

1. Create a new Go file named ent/entc.go and paste the following content:

// +build ignore

package main

import (
    "log"

    "entgo.io/ent/entc"
    "entgo.io/ent/entc/gen"
    "github.com/masseelch/elk"
)

func main() {
    ex, err := elk.NewExtension(
        elk.GenerateSpec("openapi.json"),
    )
    if err != nil {
        log.Fatalf("creating elk extension: %v", err)
    }
    err = entc.Generate("./schema", &gen.Config{}, entc.Extensions(ex))
    if err != nil {
        log.Fatalf("running ent codegen: %v", err)
    }
}

2. Edit the ent/generate.go file to execute the ent/entc.go file:

package ent

//go:generate go run -mod=mod entc.go

With these steps complete, all is set up for generating an OAS file from your schema! If you are new to Ent and want to learn more about it, how to connect to different types of databases, run migrations or work with entities, then head over to the Setup Tutorial.

Generate an OAS file

The first step on our way to the OAS file is to create an Ent schema graph:

go run -mod=mod entgo.io/ent/cmd/ent init Fridge Compartment Item

To demonstrate elk's OAS generation capabilities, we will build an example application together. Suppose I have multiple fridges with multiple compartments, and my significant other and I want to know their contents at all times. To supply ourselves with this incredibly useful information, we will create a Go server with a RESTful API. To ease the creation of client applications that can communicate with our server, we will create an OpenAPI Specification file describing its API. Once we have that, we can build a frontend to manage fridges and contents in a language of our choice by using the Swagger Codegen! You can find an example that uses docker to generate a client here.

Let's create our schema:

ent/schema/fridge.go
package schema

import (
    "entgo.io/ent"
    "entgo.io/ent/schema/edge"
    "entgo.io/ent/schema/field"
)

// Fridge holds the schema definition for the Fridge entity.
type Fridge struct {
    ent.Schema
}

// Fields of the Fridge.
func (Fridge) Fields() []ent.Field {
    return []ent.Field{
        field.String("title"),
    }
}

// Edges of the Fridge.
func (Fridge) Edges() []ent.Edge {
    return []ent.Edge{
        edge.To("compartments", Compartment.Type),
    }
}
ent/schema/compartment.go
package schema

import (
    "entgo.io/ent"
    "entgo.io/ent/schema/edge"
    "entgo.io/ent/schema/field"
)

// Compartment holds the schema definition for the Compartment entity.
type Compartment struct {
    ent.Schema
}

// Fields of the Compartment.
func (Compartment) Fields() []ent.Field {
    return []ent.Field{
        field.String("name"),
    }
}

// Edges of the Compartment.
func (Compartment) Edges() []ent.Edge {
    return []ent.Edge{
        edge.From("fridge", Fridge.Type).
            Ref("compartments").
            Unique(),
        edge.To("contents", Item.Type),
    }
}
ent/schema/item.go
package schema

import (
    "entgo.io/ent"
    "entgo.io/ent/schema/edge"
    "entgo.io/ent/schema/field"
)

// Item holds the schema definition for the Item entity.
type Item struct {
    ent.Schema
}

// Fields of the Item.
func (Item) Fields() []ent.Field {
    return []ent.Field{
        field.String("name"),
    }
}

// Edges of the Item.
func (Item) Edges() []ent.Edge {
    return []ent.Edge{
        edge.From("compartment", Compartment.Type).
            Ref("contents").
            Unique(),
    }
}

Now, let's generate the Ent code and the OAS file.

go generate ./...

In addition to the files Ent normally generates, another file named openapi.json has been created. Copy its contents and paste them into the Swagger Editor. You should see three groups: Compartment, Item and Fridge.

Swagger Editor Example

If you happen to open up the POST operation tab in the Fridge group, you will see a description of the expected request data and all the possible responses. Great!

POST operation on Fridge

Basic Configuration

The description of our API does not yet reflect what it does. Let's change that! elk provides easy-to-use configuration builders to manipulate the generated OAS file. Open up ent/entc.go and pass in the updated title and description of our Fridge API:

ent/entc.go
//go:build ignore
// +build ignore

package main

import (
    "log"

    "entgo.io/ent/entc"
    "entgo.io/ent/entc/gen"
    "github.com/masseelch/elk"
)

func main() {
    ex, err := elk.NewExtension(
        elk.GenerateSpec(
            "openapi.json",
            // It is a Content-Management-System ...
            elk.SpecTitle("Fridge CMS"),
            // You can use CommonMark syntax (https://commonmark.org/).
            elk.SpecDescription("API to manage fridges and their cooled contents. **ICY!**"),
            elk.SpecVersion("0.0.1"),
        ),
    )
    if err != nil {
        log.Fatalf("creating elk extension: %v", err)
    }
    err = entc.Generate("./schema", &gen.Config{}, entc.Extensions(ex))
    if err != nil {
        log.Fatalf("running ent codegen: %v", err)
    }
}

Rerunning the code generator will create an updated OAS file you can copy-paste into the Swagger Editor.

Updated API Info

Operation configuration

We do not want to expose endpoints to delete a fridge (seriously, who would ever want that?!). Fortunately, elk lets us configure which endpoints to generate and which to ignore. elk's default policy is to expose all routes. You can either change this behaviour to expose no routes but those explicitly asked for, or you can tell elk to exclude the DELETE operation on the Fridge by using an elk.SchemaAnnotation:

ent/schema/fridge.go
// Annotations of the Fridge.
func (Fridge) Annotations() []schema.Annotation {
    return []schema.Annotation{
        elk.DeletePolicy(elk.Exclude),
    }
}

And voilà! The DELETE operation is gone.

DELETE operation is gone

For more information about how elk's policies work and what you can do with them, have a look at the godoc.

Extend specification

What interests me most in this example is the current contents of a fridge. You can customize the generated OAS to any extent you like by using Hooks. However, that would exceed the scope of this post. An example of how to add an endpoint fridges/{id}/contents to the generated OAS file can be found here.

Generating an OAS-implementing server

I promised you in the beginning that we'd create a server behaving as described in the OAS. elk makes this easy; all you have to do is call elk.GenerateHandlers() when you configure the extension:

ent/entc.go
[...]
func main() {
    ex, err := elk.NewExtension(
        elk.GenerateSpec(
            [...]
        ),
+       elk.GenerateHandlers(),
    )
    [...]
}

Next, re-run code generation:

go generate ./...

Observe that a new directory named ent/http was created.

» tree ent/http
ent/http
├── create.go
├── delete.go
├── easyjson.go
├── handler.go
├── list.go
├── read.go
├── relations.go
├── request.go
├── response.go
└── update.go

0 directories, 10 files

You can spin up the generated server with this very simple main.go:

package main

import (
    "context"
    "log"
    "net/http"

    "<your-project>/ent"
    elk "<your-project>/ent/http"

    _ "github.com/mattn/go-sqlite3"
    "go.uber.org/zap"
)

func main() {
    // Create the ent client.
    c, err := ent.Open("sqlite3", "file:ent?mode=memory&cache=shared&_fk=1")
    if err != nil {
        log.Fatalf("failed opening connection to sqlite: %v", err)
    }
    defer c.Close()
    // Run the auto migration tool.
    if err := c.Schema.Create(context.Background()); err != nil {
        log.Fatalf("failed creating schema resources: %v", err)
    }
    // Start listening to incoming requests.
    if err := http.ListenAndServe(":8080", elk.NewHandler(c, zap.NewExample())); err != nil {
        log.Fatal(err)
    }
}

go run -mod=mod main.go

Our Fridge API server is up and running. With the generated OAS file and the Swagger Tooling, you can now generate a client stub in any supported language and forget about writing a RESTful client ever again.

Wrapping Up

In this post we introduced a new feature of elk - automatic OpenAPI Specification generation. This feature connects Ent's code-generation capabilities with OpenAPI/Swagger's rich tooling ecosystem.

Have questions? Need help with getting started? Feel free to join our Slack channel.


· 6 min read

A few months ago, Ariel made a silent but highly-impactful contribution to Ent's core: the Extension API. While Ent has had extension capabilities (such as Code-gen Hooks, External Templates, and Annotations) for a long time, there wasn't a convenient way to bundle together all of these moving parts into a coherent, self-contained component. The Extension API, which we discuss in this post, does exactly that.

Many open-source ecosystems thrive specifically because they excel at providing developers an easy and structured way to extend a small, core system. Much criticism has been made of the Node.js ecosystem (even by its original creator Ryan Dahl) but it is very hard to argue that the ease of publishing and consuming new npm modules facilitated the explosion in its popularity. I've discussed on my personal blog how protoc's plugin system works and how that made the Protobuf ecosystem thrive. In short, ecosystems are only created under modular designs.

In our post today, we will explore Ent's Extension API by building a toy example.

Getting Started

The Extension API only works for projects that use Ent's code-generation as a Go package. To set that up, after initializing your project, create a new file named ent/entc.go:

ent/entc.go
// +build ignore

package main

import (
    "log"

    "entgo.io/ent/entc"
    "entgo.io/ent/entc/gen"
)

func main() {
    err := entc.Generate("./schema", &gen.Config{})
    if err != nil {
        log.Fatal("running ent codegen:", err)
    }
}

Next, modify ent/generate.go to invoke our entc file:

ent/generate.go
package ent

//go:generate go run entc.go

Creating our Extension

All extensions must implement the Extension interface:

type Extension interface {
    // Hooks holds an optional list of Hooks to apply
    // on the graph before/after the code-generation.
    Hooks() []gen.Hook
    // Annotations injects global annotations to the gen.Config object that
    // can be accessed globally in all templates. Unlike schema annotations,
    // being serializable to JSON raw value is not mandatory.
    //
    //  {{- with $.Config.Annotations.GQL }}
    //      {{/* Annotation usage goes here. */}}
    //  {{- end }}
    //
    Annotations() []Annotation
    // Templates specifies a list of alternative templates
    // to execute or to override the default.
    Templates() []*gen.Template
    // Options specifies a list of entc.Options to evaluate on
    // the gen.Config before executing the code generation.
    Options() []Option
}

To simplify the development of new extensions, developers can embed entc.DefaultExtension to create extensions without implementing all methods. In entc.go, add:

ent/entc.go
// ...

// GreetExtension implements entc.Extension.
type GreetExtension struct {
    entc.DefaultExtension
}

Currently, our extension doesn't do anything. Next, let's connect it to our code-generation config. In entc.go, add our new extension to the entc.Generate invocation:

err := entc.Generate("./schema", &gen.Config{}, entc.Extensions(&GreetExtension{}))

Adding Templates

External templates can be bundled into extensions to enhance Ent's core code-generation functionality. With our toy example, our goal is to add to each entity a generated method named Greet that returns a greeting with the type's name when invoked. We're aiming for something like:

func (u *User) Greet() string {
    return "Greetings, User"
}

To do this, let's add a new external template file and place it in ent/templates/greet.tmpl:

ent/templates/greet.tmpl
{{ define "greet" }}

{{/* Add the base header for the generated file */}}
{{ $pkg := base $.Config.Package }}
{{ template "header" $ }}

{{/* Loop over all nodes and add the Greet method */}}
{{ range $n := $.Nodes }}
{{ $receiver := $n.Receiver }}
func ({{ $receiver }} *{{ $n.Name }}) Greet() string {
return "Greetings, {{ $n.Name }}"
}
{{ end }}
{{ end }}

Next, let's implement the Templates method:

ent/entc.go
func (*GreetExtension) Templates() []*gen.Template {
    return []*gen.Template{
        gen.MustParse(gen.NewTemplate("greet").ParseFiles("templates/greet.tmpl")),
    }
}

Next, let's kick the tires on our extension. Add a new schema for the User type in a file named ent/schema/user.go:

package schema

import (
    "entgo.io/ent"
    "entgo.io/ent/schema/field"
)

// User holds the schema definition for the User entity.
type User struct {
    ent.Schema
}

// Fields of the User.
func (User) Fields() []ent.Field {
    return []ent.Field{
        field.String("email_address").
            Unique(),
    }
}

Next, run:

go generate ./...

Observe that a new file, ent/greet.go, was created; it contains:

ent/greet.go
// Code generated by entc, DO NOT EDIT.

package ent

func (u *User) Greet() string {
    return "Greetings, User"
}

Great! Our extension was invoked from Ent's code-generation and produced the code we wanted for our schema!

Adding Annotations

Annotations provide a way to supply users of our extension with an API to modify the behavior of code generation logic. To add annotations to our extension, implement the Annotations method. Suppose that for our GreetExtension we want to provide users with the ability to configure the greeting word in the generated code:

// GreetingWord implements entc.Annotation
type GreetingWord string

func (GreetingWord) Name() string {
    return "GreetingWord"
}

Next, we add a Word field to our GreetExtension struct:

type GreetExtension struct {
    entc.DefaultExtension
    Word GreetingWord
}

Next, implement the Annotations method:

func (s *GreetExtension) Annotations() []entc.Annotation {
    return []entc.Annotation{
        s.Word,
    }
}

Now, from within your templates you can access the GreetingWord annotation. Modify ent/templates/greet.tmpl to use our new annotation:

func ({{ $receiver }} *{{ $n.Name }}) Greet() string {
    return "{{ $.Annotations.GreetingWord }}, {{ $n.Name }}"
}

Next, modify the code-generation configuration to set the GreetingWord annotation:

"ent/entc.go
err := entc.Generate("./schema",
&gen.Config{},
entc.Extensions(&GreetExtension{
Word: GreetingWord("Shalom"),
}),
)

To see our annotation control the generated code, re-run:

go generate ./...

Finally, observe that the generated ent/greet.go was updated:

func (u *User) Greet() string {
    return "Shalom, User"
}

Hooray! We added an option to use an annotation to control the greeting word in the generated Greet method!

More Possibilities

In addition to templates and annotations, the Extension API allows developers to bundle gen.Hooks and entc.Options in extensions to further control the behavior of your code-generation. In this post we will not discuss these possibilities, but if you are interested in using them head over to the documentation.
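Still, as a taste of what bundling a hook looks like, here is a minimal sketch of a gen.Hook added to our GreetExtension; it just logs the number of schemas before generation runs (the log line is illustrative):

func (s *GreetExtension) Hooks() []gen.Hook {
    return []gen.Hook{
        func(next gen.Generator) gen.Generator {
            return gen.GenerateFunc(func(g *gen.Graph) error {
                // Runs before code generation; delegate to continue.
                log.Printf("greet extension: generating %d schemas", len(g.Nodes))
                return next.Generate(g)
            })
        },
    }
}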

Wrapping Up

In this post we explored via a toy example how to use the Extension API to create new Ent code-generation extensions. As we've mentioned above, modular design that allows anyone to extend the core functionality of software is critical to the success of any ecosystem. We're seeing this begin to materialize in the Ent community; here's a list of some interesting projects that use the Extension API:

  • elk - an extension to generate REST endpoints from Ent schemas.
  • entgql - generate GraphQL servers from Ent schemas.
  • entviz - generate ER diagrams from Ent schemas.

And what about you? Do you have an idea for a useful Ent extension? I hope this post demonstrated that with the new Extension API, it is not a difficult task.

Have questions? Need help with getting started? Feel free to join our Slack channel.


Β· 3 min read

Dear community,

I’m really happy to share something that has been in the works for quite some time. Yesterday (August 31st), a press release was issued announcing that Ent is joining the Linux Foundation.

Ent was open-sourced while I was working on it with my peers at Facebook in 2019. Since then, our community has grown, and we’ve seen the adoption of Ent explode across many organizations of different sizes and sectors.

Our goal with moving under the governance of the Linux Foundation is to provide a corporate-neutral environment in which organizations can more easily contribute code, as we’ve seen with other successful OSS projects such as Kubernetes and GraphQL. In addition, the move under the governance of the Linux Foundation positions Ent where we would like it to be: a core infrastructure technology that organizations can trust because it is guaranteed to be here for a long time.

In terms of our community, nothing in particular changes: the repository already moved to github.com/ent/ent a few months ago, the license remains Apache 2.0, and we are all 100% committed to the success of the project. We’re sure that the Linux Foundation’s strong brand and organizational capabilities will help to build even more confidence in Ent and further foster its adoption in the industry.

I wanted to express my deep gratitude to the amazing folks at Facebook and the Linux Foundation that have worked hard on making this change possible and showing trust in our community to keep pushing the state-of-the-art in data access frameworks. This is a big achievement for our community, and so I want to take a moment to thank all of you for your contributions, support, and trust in this project.

On a personal note, I wanted to share that Rotem (a core contributor to Ent) and I have founded a new company, Ariga. We’re on a mission to build something that we call an “operational data graph” that is heavily built using Ent; we will be sharing more details on that in the near future. You can expect to see many new exciting features contributed to the framework by our team. In addition, Ariga employees will dedicate time and resources to support and foster this wonderful community.

If you have any questions about this change or have any ideas on how to make it even better, please don’t hesitate to reach out to me on our Slack channel.

Ariel ❀️

Β· 5 min read

Joining an existing project with a large codebase can be a daunting task.

Understanding the data model of an application is key for developers to start working on an existing project. One commonly used tool to help overcome this challenge, and enable developers to grasp an application's data model is an ER (Entity Relation) diagram.

ER diagrams provide a visual representation of your data model and detail each field of the entities. Many tools can help create these; one example is JetBrains DataGrip, which can generate an ER diagram by connecting to and inspecting an existing database:

DataGrip ER diagram example

Ent, a simple, yet powerful entity framework for Go, was originally developed inside Facebook specifically for dealing with projects with large and complex data models. This is why Ent uses code generation - it gives type-safety and code-completion out-of-the-box, which helps explain the data model and improves developer velocity. On top of all of this, wouldn't it be great to automatically generate ER diagrams that maintain a high-level view of the data model in a visually appealing representation? (I mean, who doesn't love visualizations?)

Introducing entviz

entviz is an ent extension that automatically generates a static HTML page that visualizes your data graph.

Entviz example output

Most ER diagram generation tools need to connect to your database and introspect it, which makes it harder to maintain an up-to-date diagram of the database schema. Since entviz integrates directly with your Ent schema, it does not need to connect to your database, and it automatically generates a fresh visualization every time you modify your schema.

If you want to know more about how entviz was implemented, check out the implementation section.

See it in action

First, let's install entviz:

go get github.com/hedwigz/entviz

Note: If you are not familiar with entc, you're welcome to read the entc documentation to learn more about it.

Next, let's add the entviz extension to our entc.go file:

ent/entc.go
import (
	"log"

	"entgo.io/ent/entc"
	"entgo.io/ent/entc/gen"
	"github.com/hedwigz/entviz"
)

func main() {
	err := entc.Generate("./schema", &gen.Config{}, entc.Extensions(entviz.Extension{}))
	if err != nil {
		log.Fatalf("running ent codegen: %v", err)
	}
}

Let's say we have a simple schema with a user entity and some fields:

ent/schema/user.go
// Fields of the User.
func (User) Fields() []ent.Field {
	return []ent.Field{
		field.String("name"),
		field.String("email"),
		field.Time("created").
			Default(time.Now),
	}
}

Now, entviz will automatically generate a visualization of our graph every time we run:

go generate ./...

You should now see a new file called schema-viz.html in your ent directory:

$ ll ./ent/schema-viz.html
-rw-r--r-- 1 hedwigz hedwigz 7.3K Aug 27 09:00 schema-viz.html

Open the HTML file with your favorite browser to see the visualization.


Next, let's add another entity named Post, and see how our visualization changes:

ent init Post
ent/schema/post.go
// Fields of the Post.
func (Post) Fields() []ent.Field {
	return []ent.Field{
		field.String("content"),
		field.Time("created").
			Default(time.Now),
	}
}

Now we add a one-to-many (O2M) edge from User to Post:

ent/schema/user.go
// Edges of the User.
func (User) Edges() []ent.Edge {
	return []ent.Edge{
		edge.To("posts", Post.Type),
	}
}

Finally, regenerate the code:

go generate ./...

Refresh your browser to see the updated result!


Implementation

Entviz was implemented by extending Ent via its extension API. The Ent extension API lets you aggregate multiple templates, hooks, options, and annotations. For instance, entviz uses templates to add another Go file, entviz.go, which exposes the ServeEntviz method that can be used as an HTTP handler, like so:

func main() {
	http.ListenAndServe("localhost:3002", ent.ServeEntviz())
}

We define an extension struct which embeds the default extension, and we export our template via the Templates method:

//go:embed entviz.go.tmpl
var tmplfile string

type Extension struct {
	entc.DefaultExtension
}

func (Extension) Templates() []*gen.Template {
	return []*gen.Template{
		gen.MustParse(gen.NewTemplate("entviz").Parse(tmplfile)),
	}
}

The template file is the code that we want to generate:

{{ define "entviz"}}

{{ $pkg := base $.Config.Package }}
{{ template "header" $ }}
import (
	_ "embed"
	"net/http"
	"strings"
	"time"
)

//go:embed schema-viz.html
var html string

func ServeEntviz() http.Handler {
	generateTime := time.Now()
	return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		http.ServeContent(w, req, "schema-viz.html", generateTime, strings.NewReader(html))
	})
}
{{ end }}

That's it! Now we have a new method in the ent package.

Wrapping-Up

We saw how ER diagrams help developers keep track of their data model. Next, we introduced entviz - an Ent extension that automatically generates an ER diagram for Ent schemas. We saw how entviz utilizes Ent's extension API to extend the code generation and add extra functionality. Finally, you got to see it in action by installing and using entviz in your own project. If you like the code and/or want to contribute, feel free to check out the project on GitHub.

Have questions? Need help with getting started? Feel free to join our Slack channel.


Β· 10 min read

Observability is a quality of a system that refers to how well its internal state can be measured externally. As a computer program evolves into a full-blown production system, this quality becomes increasingly important. One of the ways to make a software system more observable is to export metrics, that is, to report in some externally visible way a quantitative description of the running system's state. For instance, we might expose an HTTP endpoint where we can see how many errors occurred since the process started. In this post, we will explore how to build more observable Ent applications using Prometheus.

What is Ent?

Ent is a simple, yet powerful entity framework for Go that makes it easy to build and maintain applications with large data models.

What is Prometheus?

Prometheus is an open source monitoring system developed by engineers at SoundCloud in 2012. It includes an embedded time-series database and many integrations with third-party systems. The Prometheus client exposes the process's metrics via an HTTP endpoint (usually /metrics). This endpoint is discovered by the Prometheus scraper, which polls it at a fixed interval (typically 30s) and writes the samples into a time-series database.
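To make the scraping model concrete, here is a minimal sketch of a Go program that exposes such a /metrics endpoint using the official client library (the port is arbitrary):

package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// Serve the default registry's metrics for the Prometheus scraper to poll.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":2112", nil))
}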

Prometheus is just one example of a class of metric collection backends. Many others, such as AWS CloudWatch and InfluxDB, exist and are in wide use in the industry. Towards the end of this post, we will discuss a possible path to a unified, standards-based integration with any such backend.

Working with Prometheus

To expose an application's metrics using Prometheus, we need to create a Prometheus Collector; a collector gathers a set of metrics from your server.

In our example, we will be using two types of metrics that can be stored in a collector: Counters and Histograms. Counters are monotonically increasing cumulative metrics that represent how many times something has happened, commonly used to count the number of requests a server has processed or errors that have occurred. Histograms sample observations into buckets of configurable sizes and are commonly used to represent latency distributions (i.e., how many requests returned in under 5ms, 10ms, 100ms, 1s, etc.). In addition, Prometheus allows metrics to be broken down into labels. This is useful, for example, for counting requests while breaking down the counter by endpoint name.

Let’s see how to create such a collector using the official Go client. To do so, we will use a package in the client called promauto that simplifies the process of creating collectors. A simple example of a collector that counts (for example, total requests or the number of request errors):

package example

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

var (
	// List of dynamic labels.
	labelNames = []string{"endpoint", "error_code"}

	// Create a counter collector.
	exampleCollector = promauto.NewCounterVec(
		prometheus.CounterOpts{
			Name: "endpoint_errors",
			Help: "Number of errors in endpoints",
		},
		labelNames,
	)
)

// When using the collector, set the values of the dynamic labels and then increment the counter.
func incrementError() {
	exampleCollector.WithLabelValues("/create-user", "400").Inc()
}
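Histograms are created in much the same way. A minimal sketch, reusing the labelNames variable from the snippet above (the metric name and label values here are illustrative):

// Create a histogram collector with the client's default latency buckets.
var durationCollector = promauto.NewHistogramVec(
	prometheus.HistogramOpts{
		Name: "endpoint_duration_seconds",
		Help: "Time in seconds per endpoint request",
	},
	labelNames,
)

// Record a single latency observation for a given endpoint.
func observeDuration(seconds float64) {
	durationCollector.WithLabelValues("/create-user", "200").Observe(seconds)
}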

Ent Hooks

Hooks are a feature of Ent that allows adding custom logic before and after operations that change the data entities.

A mutation is an operation that changes something in the database. There are 5 types of mutations:

  1. Create.
  2. UpdateOne.
  3. Update.
  4. DeleteOne.
  5. Delete.

Hooks are functions that get an ent.Mutator and return a mutator back. They function similarly to the popular HTTP middleware pattern.

package example

import (
	"context"

	"entgo.io/ent"
)

func exampleHook() ent.Hook {
	// Use this closure to initialize your hook.
	return func(next ent.Mutator) ent.Mutator {
		return ent.MutateFunc(func(ctx context.Context, m ent.Mutation) (ent.Value, error) {
			// Do something before mutation.
			v, err := next.Mutate(ctx, m)
			if err != nil {
				// Do something if the mutation failed.
			}
			// Do something after mutation.
			return v, err
		})
	}
}

In Ent, there are two types of mutation hooks - schema hooks and runtime hooks. Schema hooks are mainly used for defining custom mutation logic on a specific entity type, for example, syncing entity creation to another system. Runtime hooks, on the other hand, are used to define more global logic for adding things like logging, metrics, tracing, etc.
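For illustration, a schema hook is declared on the schema type itself. A minimal sketch, assuming a User schema in ent/schema/user.go (the syncing logic is left as a comment):

// Hooks of the User. These run only for mutations on User entities.
func (User) Hooks() []ent.Hook {
	return []ent.Hook{
		func(next ent.Mutator) ent.Mutator {
			return ent.MutateFunc(func(ctx context.Context, m ent.Mutation) (ent.Value, error) {
				// e.g., sync the entity creation to another system here.
				return next.Mutate(ctx, m)
			})
		},
	}
}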

For our use case we should definitely use runtime hooks, because for the metrics to be valuable we want to export them for all operations on all entity types:

package example

import (
	"entprom/ent"
	"entprom/ent/hook"
)

func main() {
	client, _ := ent.Open("sqlite3", "file:ent?mode=memory&cache=shared&_fk=1")

	// Add a hook only on user mutations.
	client.User.Use(exampleHook())

	// Add a hook only on update operations.
	client.Use(hook.On(exampleHook(), ent.OpUpdate|ent.OpUpdateOne))
}

Exporting Prometheus Metrics for an Ent Application

With all of the introductions complete, let’s cut to the chase and show how to use Prometheus and Ent hooks together to create an observable application. Our goal with this example is to export these metrics using a hook:

Metric Name                       Description
ent_operation_total               Number of ent mutation operations
ent_operation_error               Number of failed ent mutation operations
ent_operation_duration_seconds    Time in seconds per operation

Each of these metrics will be broken down by labels into two dimensions:

  • mutation_type: Entity type that is being mutated (User, BlogPost, Account etc.).
  • mutation_op: The operation that is being performed (Create, Delete etc.).

Let’s start by defining our collectors:

// Ent dynamic dimensions.
const (
	mutationType = "mutation_type"
	mutationOp   = "mutation_op"
)

var entLabels = []string{mutationType, mutationOp}

// Create a collector for the total operations counter.
func initOpsProcessedTotal() *prometheus.CounterVec {
	return promauto.NewCounterVec(
		prometheus.CounterOpts{
			Name: "ent_operation_total",
			Help: "Number of ent mutation operations",
		},
		entLabels,
	)
}

// Create a collector for the error counter.
func initOpsProcessedError() *prometheus.CounterVec {
	return promauto.NewCounterVec(
		prometheus.CounterOpts{
			Name: "ent_operation_error",
			Help: "Number of failed ent mutation operations",
		},
		entLabels,
	)
}

// Create a histogram collector for operation duration.
func initOpsDuration() *prometheus.HistogramVec {
	return promauto.NewHistogramVec(
		prometheus.HistogramOpts{
			Name: "ent_operation_duration_seconds",
			Help: "Time in seconds per operation",
		},
		entLabels,
	)
}

Next, let’s define our new hook:

// Hook initializes the collectors, increments the total counter before each
// mutation, increments the error counter on failure, and records the duration
// after the mutation completes.
func Hook() ent.Hook {
	opsProcessedTotal := initOpsProcessedTotal()
	opsProcessedError := initOpsProcessedError()
	opsDuration := initOpsDuration()
	return func(next ent.Mutator) ent.Mutator {
		return ent.MutateFunc(func(ctx context.Context, m ent.Mutation) (ent.Value, error) {
			// Before mutation, start measuring time.
			start := time.Now()
			// Extract dynamic labels from the mutation.
			labels := prometheus.Labels{mutationType: m.Type(), mutationOp: m.Op().String()}
			// Increment the total ops counter.
			opsProcessedTotal.With(labels).Inc()
			// Execute the mutation.
			v, err := next.Mutate(ctx, m)
			if err != nil {
				// In case of error, increment the error counter.
				opsProcessedError.With(labels).Inc()
			}
			// Stop the time measurement.
			duration := time.Since(start)
			// Record the duration in seconds.
			opsDuration.With(labels).Observe(duration.Seconds())
			return v, err
		})
	}
}

Connecting the Prometheus Collector to our Service

After defining our hook, let’s see next how to connect it to our application and how to use Prometheus to serve an endpoint that exposes the metrics in our collectors:

package main

import (
	"context"
	"log"
	"net/http"

	"entprom"
	"entprom/ent"

	_ "github.com/mattn/go-sqlite3"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func createClient() *ent.Client {
	c, err := ent.Open("sqlite3", "file:ent?mode=memory&cache=shared&_fk=1")
	if err != nil {
		log.Fatalf("failed opening connection to sqlite: %v", err)
	}
	ctx := context.Background()
	// Run the auto migration tool.
	if err := c.Schema.Create(ctx); err != nil {
		log.Fatalf("failed creating schema resources: %v", err)
	}
	return c
}

func handler(client *ent.Client) func(w http.ResponseWriter, r *http.Request) {
	return func(w http.ResponseWriter, r *http.Request) {
		ctx := context.Background()
		// Run operations.
		_, err := client.User.Create().SetName("a8m").Save(ctx)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
	}
}

func main() {
	// Create the Ent client and run the migration.
	client := createClient()
	// Register the Prometheus hook.
	client.Use(entprom.Hook())
	// Simple handler to run actions on our DB.
	http.HandleFunc("/", handler(client))
	// Expose the collected metrics for Prometheus to scrape.
	http.Handle("/metrics", promhttp.Handler())
	log.Println("server starting on port 8080")
	// Run the server.
	log.Fatal(http.ListenAndServe(":8080", nil))
}

After accessing / on our server a few times (using curl or a browser), go to /metrics. There you will see the output from the Prometheus client:

# HELP ent_operation_duration_seconds Time in seconds per operation
# TYPE ent_operation_duration_seconds histogram
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="0.005"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="0.01"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="0.025"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="0.05"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="0.1"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="0.25"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="0.5"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="1"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="2.5"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="5"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="10"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="+Inf"} 2
ent_operation_duration_seconds_sum{mutation_op="OpCreate",mutation_type="User"} 0.000265669
ent_operation_duration_seconds_count{mutation_op="OpCreate",mutation_type="User"} 2
# HELP ent_operation_error Number of failed ent mutation operations
# TYPE ent_operation_error counter
ent_operation_error{mutation_op="OpCreate",mutation_type="User"} 1
# HELP ent_operation_total Number of ent mutation operations
# TYPE ent_operation_total counter
ent_operation_total{mutation_op="OpCreate",mutation_type="User"} 2

In the top part, we can see the calculated histogram: it counts the number of operations that fall into each “bucket”. After that, we can see the total number of operations and the number of errors. Each metric is accompanied by its description, which can also be seen when querying with the Prometheus dashboard.

The Prometheus client is only one component of the Prometheus architecture. To run a complete system, including a scraper that will poll your endpoint, a Prometheus server that will store your metrics and answer queries, and a simple UI to interact with it, I recommend reading the official documentation or using the docker-compose.yaml in this example repo.

Future Work on Observability in Ent

As we’ve mentioned above, there is an abundance of metric collection backends available today, Prometheus being just one of many successful projects. While these solutions differ in many dimensions (self-hosted vs. SaaS, different storage engines with different query languages, and more), from the metric reporting client perspective they are virtually identical.

In cases like these, good software engineering principles suggest that the concrete backend should be abstracted away from the client using an interface. This interface can then be implemented by the different backends, so client applications can easily switch between implementations. Such changes have been happening in our industry in recent years. Consider, for example, the Open Container Initiative or the Service Mesh Interface: both are initiatives that strive to define a standard interface for a problem space, which in turn is meant to create an ecosystem of implementations of the standard. In the observability space, the exact same convergence is occurring with OpenCensus and OpenTracing currently merging into OpenTelemetry; a sketch of the idea follows below.
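A hypothetical sketch of what such a backend-agnostic interface might look like (the names here are illustrative, not a real library API):

// Counter is a monotonically increasing metric.
type Counter interface {
	Inc()
}

// MetricsBackend abstracts a concrete metrics system
// (Prometheus, CloudWatch, InfluxDB, etc.).
type MetricsBackend interface {
	// Counter returns (or creates) a counter with the given name and labels.
	Counter(name string, labels map[string]string) Counter
}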

As nice as it would be to publish an Ent + Prometheus extension similar to the one presented in this post, we are firm believers that observability should be solved with a standards-based approach. We invite everyone to join the discussion on what is the right way to do this for Ent.

Wrap-Up

We started this post by presenting Prometheus, a popular open-source monitoring solution. Next, we reviewed β€œHooks”, a feature of Ent that allows adding custom logic before and after operations that change the data entities. We then showed how to integrate the two to create observable applications using Ent. Finally, we discussed the future of observability in Ent and invited everyone to join the discussion to shape it.

Have questions? Need help with getting started? Feel free to join our Slack channel.
