
· 1 min read

In a previous blog post, we presented elk - an extension to Ent enabling you to generate a fully-working Go CRUD HTTP API from your schema. In today's post, I'd like to introduce a shiny new feature that recently made it into elk: a fully compliant OpenAPI Specification (OAS) generator.

OAS (formerly known as the Swagger Specification) is a technical specification defining a standard, language-agnostic interface description for REST APIs. This allows both humans and automated tools to understand the described service without access to the actual source code or additional documentation. Combined with the Swagger Tooling, you can generate both server and client boilerplate code for more than 20 languages, just by passing in the OAS file.
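For example, once elk has generated an openapi.json for us (as we will do below), producing a Go client stub could be as simple as the following swagger-codegen invocation; the target language and output directory here are illustrative choices:

swagger-codegen generate -i openapi.json -l go -o ./client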

Getting Started

The first step is to add the elk package to your project:

go get github.com/masseelch/elk@latest

elk uses the Ent Extension API to integrate with Ent’s code-generation. This requires that we use the entc (ent codegen) package as described here to generate code for our project. Follow the next two steps to enable it and to configure Ent to work with the elk extension:

1. Create a new Go file named ent/entc.go and paste the following content:

// +build ignore

package main

import (
    "log"

    "entgo.io/ent/entc"
    "entgo.io/ent/entc/gen"
    "github.com/masseelch/elk"
)

func main() {
    ex, err := elk.NewExtension(
        elk.GenerateSpec("openapi.json"),
    )
    if err != nil {
        log.Fatalf("creating elk extension: %v", err)
    }
    err = entc.Generate("./schema", &gen.Config{}, entc.Extensions(ex))
    if err != nil {
        log.Fatalf("running ent codegen: %v", err)
    }
}

2. Edit the ent/generate.go file to execute the ent/entc.go file:

package ent

//go:generate go run -mod=mod entc.go

With these steps complete, all is set up for generating an OAS file from your schema! If you are new to Ent and want to learn more about it, how to connect to different types of databases, run migrations or work with entities, then head over to the Setup Tutorial.

Generate an OAS file

The first step on our way to the OAS file is to create an Ent schema graph:

go run -mod=mod entgo.io/ent/cmd/ent new Fridge Compartment Item

To demonstrate elk's OAS generation capabilities, we will build an example application together. Suppose I have multiple fridges with multiple compartments, and my significant other and I want to know their contents at all times. To supply ourselves with this incredibly useful information we will create a Go server with a RESTful API. To ease the creation of client applications that can communicate with our server, we will create an OpenAPI Specification file describing its API. Once we have that, we can build a frontend to manage fridges and contents in a language of our choice by using the Swagger Codegen! You can find an example that uses docker to generate a client here.

Let's create our schema:

ent/schema/fridge.go
package schema

import (
    "entgo.io/ent"
    "entgo.io/ent/schema/edge"
    "entgo.io/ent/schema/field"
)

// Fridge holds the schema definition for the Fridge entity.
type Fridge struct {
    ent.Schema
}

// Fields of the Fridge.
func (Fridge) Fields() []ent.Field {
    return []ent.Field{
        field.String("title"),
    }
}

// Edges of the Fridge.
func (Fridge) Edges() []ent.Edge {
    return []ent.Edge{
        edge.To("compartments", Compartment.Type),
    }
}

ent/schema/compartment.go
package schema

import (
    "entgo.io/ent"
    "entgo.io/ent/schema/edge"
    "entgo.io/ent/schema/field"
)

// Compartment holds the schema definition for the Compartment entity.
type Compartment struct {
    ent.Schema
}

// Fields of the Compartment.
func (Compartment) Fields() []ent.Field {
    return []ent.Field{
        field.String("name"),
    }
}

// Edges of the Compartment.
func (Compartment) Edges() []ent.Edge {
    return []ent.Edge{
        edge.From("fridge", Fridge.Type).
            Ref("compartments").
            Unique(),
        edge.To("contents", Item.Type),
    }
}

ent/schema/item.go
package schema

import (
    "entgo.io/ent"
    "entgo.io/ent/schema/edge"
    "entgo.io/ent/schema/field"
)

// Item holds the schema definition for the Item entity.
type Item struct {
    ent.Schema
}

// Fields of the Item.
func (Item) Fields() []ent.Field {
    return []ent.Field{
        field.String("name"),
    }
}

// Edges of the Item.
func (Item) Edges() []ent.Edge {
    return []ent.Edge{
        edge.From("compartment", Compartment.Type).
            Ref("contents").
            Unique(),
    }
}

Now, let's generate the Ent code and the OAS file.

go generate ./...

In addition to the files Ent normally generates, another file named openapi.json has been created. Copy its contents and paste them into the Swagger Editor. You should see three groups: Compartment, Item and Fridge.

Swagger Editor Example

Swagger Editor Example

If you open up the POST operation tab in the Fridge group, you will see a description of the expected request data and all the possible responses. Great!

POST operation on Fridge

POST operation on Fridge

Basic Configuration

The description of our API does not yet reflect what it does, so let's change that! elk provides easy-to-use configuration builders to manipulate the generated OAS file. Open up ent/entc.go and pass in the updated title and description of our Fridge API:

ent/entc.go
//go:build ignore
// +build ignore

package main

import (
    "log"

    "entgo.io/ent/entc"
    "entgo.io/ent/entc/gen"
    "github.com/masseelch/elk"
)

func main() {
    ex, err := elk.NewExtension(
        elk.GenerateSpec(
            "openapi.json",
            // It is a Content-Management-System ...
            elk.SpecTitle("Fridge CMS"),
            // You can use CommonMark syntax (https://commonmark.org/).
            elk.SpecDescription("API to manage fridges and their cooled contents. **ICY!**"),
            elk.SpecVersion("0.0.1"),
        ),
    )
    if err != nil {
        log.Fatalf("creating elk extension: %v", err)
    }
    err = entc.Generate("./schema", &gen.Config{}, entc.Extensions(ex))
    if err != nil {
        log.Fatalf("running ent codegen: %v", err)
    }
}

Rerunning the code generator will create an updated OAS file you can copy-paste into the Swagger Editor.

Updated API Info

Updated API Info

Operation configuration

We do not want to expose endpoints for deleting a fridge (seriously, who would ever want that?!). Fortunately, elk lets us configure which endpoints to generate and which to ignore. elk's default policy is to expose all routes. You can either change this behaviour to not expose any routes except those explicitly asked for, or you can simply tell elk to exclude the DELETE operation on the Fridge by using an elk.SchemaAnnotation:

ent/schema/fridge.go
// Annotations of the Fridge.
func (Fridge) Annotations() []schema.Annotation {
    return []schema.Annotation{
        elk.DeletePolicy(elk.Exclude),
    }
}

And voilà! The DELETE operation is gone.

DELETE operation is gone

DELETE operation is gone

For more information about how elk's policies work and what you can do with them, have a look at the godoc.

Extend specification

The thing I would be most interested in, in this example, is the current contents of a fridge. You can customize the generated OAS to any extent you like by using Hooks. However, that would exceed the scope of this post. An example of how to add an endpoint fridges/{id}/contents to the generated OAS file can be found here.

Generating an OAS-implementing server

I promised you in the beginning that we'd create a server behaving as described in the OAS. elk makes this easy; all you have to do is call elk.GenerateHandlers() when you configure the extension:

ent/entc.go
[...]
func main() {
    ex, err := elk.NewExtension(
        elk.GenerateSpec(
            [...]
        ),
+       elk.GenerateHandlers(),
    )
    [...]
}

Next, re-run code generation:

go generate ./...

Observe that a new directory named ent/http was created.

» tree ent/http
ent/http
├── create.go
├── delete.go
├── easyjson.go
├── handler.go
├── list.go
├── read.go
├── relations.go
├── request.go
├── response.go
└── update.go

0 directories, 10 files

You can spin up the generated server with this very simple main.go:

package main

import (
    "context"
    "log"
    "net/http"

    "<your-project>/ent"
    elk "<your-project>/ent/http"

    _ "github.com/mattn/go-sqlite3"
    "go.uber.org/zap"
)

func main() {
    // Create the ent client.
    c, err := ent.Open("sqlite3", "file:ent?mode=memory&cache=shared&_fk=1")
    if err != nil {
        log.Fatalf("failed opening connection to sqlite: %v", err)
    }
    defer c.Close()
    // Run the auto migration tool.
    if err := c.Schema.Create(context.Background()); err != nil {
        log.Fatalf("failed creating schema resources: %v", err)
    }
    // Start listening to incoming requests.
    if err := http.ListenAndServe(":8080", elk.NewHandler(c, zap.NewExample())); err != nil {
        log.Fatal(err)
    }
}
go run -mod=mod main.go

Our Fridge API server is up and running. With the generated OAS file and the Swagger Tooling, you can now generate a client stub in any supported language and forget about writing RESTful clients ever again.
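Before reaching for generated clients, you can sanity-check the server by hand. Assuming elk mounts the generated handlers under pluralized entity paths, as its generated OAS suggests, creating and then reading a fridge might look like this:

curl -X 'POST' -H 'Content-Type: application/json' -d '{"title":"Kitchen"}' 'localhost:8080/fridges'
curl 'localhost:8080/fridges/1'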

Wrapping Up

In this post we introduced a new feature of elk: automatic OpenAPI Specification generation. This feature connects Ent's code-generation capabilities with OpenAPI/Swagger's rich tooling ecosystem.

Have questions? Need help with getting started? Feel free to join our Discord server or Slack channel.


· 1 min read

A few months ago, Ariel made a quiet but highly impactful contribution to Ent's core: the Extension API. While Ent has had extension capabilities (such as Code-gen Hooks, External Templates, and Annotations) for a long time, there wasn't a convenient way to bundle all of these moving parts together into a coherent, self-contained component. The Extension API, which we discuss in this post, does exactly that.

Many open-source ecosystems thrive specifically because they excel at providing developers an easy and structured way to extend a small core system. Much criticism has been made of the Node.js ecosystem (even by its original creator, Ryan Dahl), but it is very hard to argue that the ease of publishing and consuming new npm modules did not facilitate the explosion in its popularity. I've discussed on my personal blog how protoc's plugin system works and how it made the Protobuf ecosystem thrive. In short, ecosystems are only created under modular designs.

In our post today, we will explore Ent's Extension API by building a toy example.

Getting Started

The Extension API only works for projects that use Ent's code-generation as a Go package. To set that up, after initializing your project, create a new file named ent/entc.go:

ent/entc.go
//+build ignore

package main

import (
    "log"

    "entgo.io/ent/entc"
    "entgo.io/ent/entc/gen"
)

func main() {
    err := entc.Generate("./schema", &gen.Config{})
    if err != nil {
        log.Fatal("running ent codegen:", err)
    }
}

Next, modify ent/generate.go to invoke our entc file:

ent/generate.go
package ent

//go:generate go run entc.go

Creating our Extension

All extensions must implement the Extension interface:

type Extension interface {
    // Hooks holds an optional list of Hooks to apply
    // on the graph before/after the code-generation.
    Hooks() []gen.Hook
    // Annotations injects global annotations to the gen.Config object that
    // can be accessed globally in all templates. Unlike schema annotations,
    // being serializable to JSON raw value is not mandatory.
    //
    //  {{- with $.Config.Annotations.GQL }}
    //      {{/* Annotation usage goes here. */}}
    //  {{- end }}
    //
    Annotations() []Annotation
    // Templates specifies a list of alternative templates
    // to execute or to override the default.
    Templates() []*gen.Template
    // Options specifies a list of entc.Options to evaluate on
    // the gen.Config before executing the code generation.
    Options() []Option
}

To simplify the development of new extensions, developers can embed entc.DefaultExtension to create extensions without implementing all methods. In entc.go, add:

ent/entc.go
// ...

// GreetExtension implements entc.Extension.
type GreetExtension struct {
    entc.DefaultExtension
}

Currently, our extension doesn't do anything. Next, let's connect it to our code-generation config. In entc.go, add our new extension to the entc.Generate invocation:

err := entc.Generate("./schema", &gen.Config{}, entc.Extensions(&GreetExtension{}))

Adding Templates

External templates can be bundled into extensions to enhance Ent's core code-generation functionality. With our toy example, our goal is to add to each entity a generated method named Greet that returns a greeting with the type's name when invoked. We're aiming for something like:

func (u *User) Greet() string {
    return "Greetings, User"
}

To do this, let's add a new external template file and place it in ent/templates/greet.tmpl:

ent/templates/greet.tmpl
{{ define "greet" }}

{{/* Add the base header for the generated file */}}
{{ $pkg := base $.Config.Package }}
{{ template "header" $ }}

{{/* Loop over all nodes and add the Greet method */}}
{{ range $n := $.Nodes }}
    {{ $receiver := $n.Receiver }}
    func ({{ $receiver }} *{{ $n.Name }}) Greet() string {
        return "Greetings, {{ $n.Name }}"
    }
{{ end }}
{{ end }}

Next, let's implement the Templates method:

ent/entc.go
func (*GreetExtension) Templates() []*gen.Template {
    return []*gen.Template{
        gen.MustParse(gen.NewTemplate("greet").ParseFiles("templates/greet.tmpl")),
    }
}

Next, let's kick the tires on our extension. Add a new schema for the User type in a file named ent/schema/user.go:

package schema

import (
    "entgo.io/ent"
    "entgo.io/ent/schema/field"
)

// User holds the schema definition for the User entity.
type User struct {
    ent.Schema
}

// Fields of the User.
func (User) Fields() []ent.Field {
    return []ent.Field{
        field.String("email_address").
            Unique(),
    }
}

Next, run:

go generate ./...

Observe that a new file, ent/greet.go, was created; it contains:

ent/greet.go
// Code generated by ent, DO NOT EDIT.

package ent

func (u *User) Greet() string {
    return "Greetings, User"
}

Great! Our extension was invoked from Ent's code-generation and produced the code we wanted for our schema!

Adding Annotations

Annotations provide a way to supply users of our extension with an API to modify the behavior of code generation logic. To add annotations to our extension, implement the Annotations method. Suppose that for our GreetExtension we want to provide users with the ability to configure the greeting word in the generated code:

// GreetingWord implements entc.Annotation
type GreetingWord string

func (GreetingWord) Name() string {
    return "GreetingWord"
}

Next, we add a word field to our GreetExtension struct:

type GreetExtension struct {
    entc.DefaultExtension
    Word GreetingWord
}

Next, implement the Annotations method:

func (s *GreetExtension) Annotations() []entc.Annotation {
    return []entc.Annotation{
        s.Word,
    }
}

Now, from within your templates you can access the GreetingWord annotation. Modify ent/templates/greet.tmpl to use our new annotation:

func ({{ $receiver }} *{{ $n.Name }}) Greet() string {
    return "{{ $.Annotations.GreetingWord }}, {{ $n.Name }}"
}
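Note that if a user wires up the extension without setting Word, the annotation renders as an empty string and the generated greeting becomes ", User". If you want a fallback, Go's template actions can provide one; a small sketch:

{{ $word := "Greetings" }}
{{ with $.Annotations.GreetingWord }}{{ $word = . }}{{ end }}
return "{{ $word }}, {{ $n.Name }}"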

Next, modify the code-generation configuration to set the GreetingWord annotation:

"ent/entc.go
err := entc.Generate("./schema",
&gen.Config{},
entc.Extensions(&GreetExtension{
Word: GreetingWord("Shalom"),
}),
)

To see our annotation control the generated code, re-run:

go generate ./...

Finally, observe that the generated ent/greet.go was updated:

func (u *User) Greet() string {
    return "Shalom, User"
}

Hooray! We added an option to use an annotation to control the greeting word in the generated Greet method!

More Possibilities

In addition to templates and annotations, the Extension API allows developers to bundle gen.Hooks and entc.Options in extensions to further control the behavior of your code-generation. In this post we will not discuss these possibilities, but if you are interested in using them head over to the documentation.
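To give a taste nonetheless, here is a minimal sketch of what bundling a code-gen hook into our toy extension might look like; it merely logs every type before generation runs, and assumes the gen.GenerateFunc adapter used for code-gen hooks:

func (s *GreetExtension) Hooks() []gen.Hook {
    return []gen.Hook{
        func(next gen.Generator) gen.Generator {
            return gen.GenerateFunc(func(g *gen.Graph) error {
                // Log each node (entity type) in the graph, then delegate
                // to the next generator in the chain.
                for _, n := range g.Nodes {
                    log.Println("generating:", n.Name)
                }
                return next.Generate(g)
            })
        },
    }
}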

Wrapping Up

In this post we explored, via a toy example, how to use the Extension API to create new Ent code-generation extensions. As we've mentioned above, a modular design that allows anyone to extend the core functionality of software is critical to the success of any ecosystem. We're seeing this claim start to materialize in the Ent community; here's a list of some interesting projects that use the Extension API:

  • elk - an extension to generate REST endpoints from Ent schemas.
  • entgql - generate GraphQL servers from Ent schemas.
  • entviz - generate ER diagrams from Ent schemas.

And what about you? Do you have an idea for a useful Ent extension? I hope this post demonstrated that with the new Extension API, it is not a difficult task.

Have questions? Need help with getting started? Feel free to join our Discord server or Slack channel.


· 1 min read

Dear community,

I’m really happy to share something that has been in the works for quite some time. Yesterday (August 31st), a press release was issued announcing that Ent is joining the Linux Foundation.

Ent was open-sourced while I was working on it with my peers at Facebook in 2019. Since then, our community has grown, and we’ve seen the adoption of Ent explode across many organizations of different sizes and sectors.

Our goal with moving under the governance of the Linux Foundation is to provide a corporate-neutral environment in which organizations can more easily contribute code, as we’ve seen with other successful OSS projects such as Kubernetes and GraphQL. In addition, the move under the governance of the Linux Foundation positions Ent where we would like it to be, a core, infrastructure technology that organizations can trust because it is guaranteed to be here for a long time.

In terms of our community, nothing in particular changes; the repository already moved to github.com/ent/ent a few months ago, the license remains Apache 2.0, and we are all 100% committed to the success of the project. We're sure that the Linux Foundation's strong brand and organizational capabilities will help build even more confidence in Ent and further foster its adoption in the industry.

I wanted to express my deep gratitude to the amazing folks at Facebook and the Linux Foundation that have worked hard on making this change possible and showing trust in our community to keep pushing the state-of-the-art in data access frameworks. This is a big achievement for our community, and so I want to take a moment to thank all of you for your contributions, support, and trust in this project.

On a personal note, I wanted to share that Rotem (a core contributor to Ent) and I have founded a new company, Ariga. We’re on a mission to build something that we call an “operational data graph” that is heavily built using Ent, we will be sharing more details on that in the near future. You can expect to see many new exciting features contributed to the framework by our team. In addition, Ariga employees will dedicate time and resources to support and foster this wonderful community.

If you have any questions about this change or have any ideas on how to make it even better, please don’t hesitate to reach out to me on our Discord server or Slack channel.

Ariel ❤️

· 1 min read

Joining an existing project with a large codebase can be a daunting task.

Understanding the data model of an application is key for developers starting to work on an existing project. ER (Entity Relation) diagrams are a commonly-used tool to overcome this challenge and help developers grasp an application's data model.

ER diagrams are a visual representation of the data model that details each entity's fields. Many tools can help create them; one example is JetBrains DataGrip, which can generate an ER diagram by connecting to and inspecting an existing database:

Datagrip ER diagram

DataGrip ER diagram example

Ent, a simple yet powerful entity framework for Go, was originally developed inside Facebook specifically for dealing with projects with large and complex data models. This is why Ent uses code generation: it provides type-safety and code-completion out of the box, which helps explain the data model and increases developer velocity. On top of all that, wouldn't it be great to automatically generate ER diagrams that maintain a high-level view of the data model in a visually appealing representation? (Who doesn't love visualizations?)

Introducing entviz

entviz is an ent extension that automatically generates a static HTML page that visualizes your data graph.

Entviz example output

Entviz example output

Most ER diagram generation tools need to connect to your database and introspect it, which makes it harder to maintain an up-to-date diagram of the database schema. Since entviz integrates directly into your Ent schema, it does not need to connect to your database, and it automatically generates a fresh visualization every time you modify your schema.

If you want to learn more about how entviz was implemented, check out the implementation section below.

See it in action

First, let's add the entviz extension to our entc.go file:

go get github.com/hedwigz/entviz
If you are not familiar with entc, you're welcome to read the entc documentation to learn more about it.
ent/entc.go
// +build ignore

package main

import (
    "log"

    "entgo.io/ent/entc"
    "entgo.io/ent/entc/gen"
    "github.com/hedwigz/entviz"
)

func main() {
    err := entc.Generate("./schema", &gen.Config{}, entc.Extensions(entviz.Extension{}))
    if err != nil {
        log.Fatalf("running ent codegen: %v", err)
    }
}

Let's say we have a simple schema with a User entity and a few fields:

ent/schema/user.go
// Fields of the User.
func (User) Fields() []ent.Field {
    return []ent.Field{
        field.String("name"),
        field.String("email"),
        field.Time("created").
            Default(time.Now),
    }
}

Now, entviz will automatically generate a visualization of our graph every time we run:

go generate ./...

You should see a new file called schema-viz.html in your ent directory:

$ ll ./ent/schema-viz.html
-rw-r--r-- 1 hedwigz hedwigz 7.3K Aug 27 09:00 schema-viz.html

Open the HTML file in your favorite browser to see the visualization.

tutorial image

Next, let's add another entity named Post, and see how the visualization changes:

ent new Post
ent/schema/post.go
// Fields of the Post.
func (Post) Fields() []ent.Field {
    return []ent.Field{
        field.String("content"),
        field.Time("created").
            Default(time.Now),
    }
}

Next, let's add an edge (O2M) from User to Post:

ent/schema/user.go
// Edges of the User.
func (User) Edges() []ent.Edge {
    return []ent.Edge{
        edge.To("posts", Post.Type),
    }
}

Finally, let's regenerate the code:

go generate ./...

Refresh your browser to see the updated result!

tutorial image 2

Implementation

Entviz was implemented by extending ent via its extension API. The Ent extension API lets you aggregate templates, hooks, options and annotations. For instance, entviz uses templates to add another go file, entviz.go, which exposes a ServeEntviz method that can be used as an http handler, like this:

func main() {
    http.ListenAndServe("localhost:3002", ent.ServeEntviz())
}

We define an extension struct that embeds the default extension, and we export our template via the Templates method:

//go:embed entviz.go.tmpl
var tmplfile string

type Extension struct {
    entc.DefaultExtension
}

func (Extension) Templates() []*gen.Template {
    return []*gen.Template{
        gen.MustParse(gen.NewTemplate("entviz").Parse(tmplfile)),
    }
}

The template file is the code that we want to generate:

{{ define "entviz"}}

{{ $pkg := base $.Config.Package }}
{{ template "header" $ }}
import (
    _ "embed"
    "net/http"
    "strings"
    "time"
)

//go:embed schema-viz.html
var html string

func ServeEntviz() http.Handler {
    generateTime := time.Now()
    return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
        http.ServeContent(w, req, "schema-viz.html", generateTime, strings.NewReader(html))
    })
}
{{ end }}

That's it! We now have a new method in the ent package.

Wrapping Up

We saw how ER diagrams help developers keep track of their data model. Next, we introduced entviz, an Ent extension that automatically generates an ER diagram for Ent schemas. We saw how entviz utilizes Ent's extension API to extend the code generation and add extra functionality. Finally, you saw it in action by installing and using entviz in your own project. If you like the code and/or want to contribute, feel free to check out the project on GitHub.

Have questions? Need help with getting started? Feel free to join our Discord server or Slack channel.


· 1 min read

Observability is a quality of a system that refers to how well its internal state can be measured externally. As a computer program evolves into a full-blown production system this quality becomes increasingly important. One of the ways to make a software system more observable is to export metrics, that is, to report in some externally visible way a quantitative description of the running system's state. For instance, to expose an HTTP endpoint where we can see how many errors occurred since the process has started. In this post, we will explore how to build more observable Ent applications using Prometheus.

What is Ent?

Ent is a simple, yet powerful entity framework for Go, that makes it easy to build and maintain applications with large data models.

What is Prometheus?

Prometheus is an open-source monitoring system created by engineers at SoundCloud in 2012. It includes an embedded time-series database and many integrations with third-party systems. The Prometheus client exposes the process's metrics via an HTTP endpoint (usually /metrics); this endpoint is discovered by the Prometheus scraper, which polls the endpoint every interval (typically 30s) and writes the metrics into a time-series database.

Prometheus is just an example of a class of metric collection backends. Many others, such as AWS CloudWatch, InfluxDB and others exist and are in wide use in the industry. Towards the end of this post, we will discuss a possible path to a unified, standards-based integration with any such backend.

Working with Prometheus

To expose an application's metrics using Prometheus, we need to create a Prometheus Collector; a collector collects a set of metrics from your server.

In our example, we will be using two types of metrics that can be stored in a collector: Counters and Histograms. Counters are monotonically increasing cumulative metrics that represent how many times something has happened, and are commonly used to count the number of requests a server has processed or the number of errors that have occurred. Histograms sample observations into buckets of configurable sizes and are commonly used to represent latency distributions (i.e. how many requests returned in under 5ms, 10ms, 100ms, 1s, etc.). In addition, Prometheus allows metrics to be broken down into labels. This is useful, for example, for counting requests while breaking down the counter by endpoint name.

Let’s see how to create such a collector using the official Go client. To do so, we will use a package in the client called promauto that simplifies the process of creating collectors. A simple example of a collector that counts (for example, total requests or number of request errors):

package example

import (
    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promauto"
)

var (
    // List of dynamic labels
    labelNames = []string{"endpoint", "error_code"}

    // Create a counter collector
    exampleCollector = promauto.NewCounterVec(
        prometheus.CounterOpts{
            Name: "endpoint_errors",
            Help: "Number of errors in endpoints",
        },
        labelNames,
    )
)

// To use the collector, set the values of the dynamic labels and then increment the counter
func incrementError() {
    exampleCollector.WithLabelValues("/create-user", "400").Inc()
}
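Creating a histogram collector with promauto looks very similar; here is a minimal sketch (the metric name and the use of the client's default buckets are illustrative choices):

// Create a histogram collector for request latency, labeled by endpoint.
var latencyCollector = promauto.NewHistogramVec(
    prometheus.HistogramOpts{
        Name:    "endpoint_duration_seconds",
        Help:    "Request latency per endpoint",
        Buckets: prometheus.DefBuckets, // 0.005s up to 10s
    },
    []string{"endpoint"},
)

// Record a single request to /create-user that took 42ms.
func observeLatency() {
    latencyCollector.WithLabelValues("/create-user").Observe(0.042)
}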

Ent Hooks

Hooks are a feature of Ent that allows adding custom logic before and after operations that change the data entities.

A mutation is an operation that changes something in the database. There are 5 types of mutations:

  1. Create.
  2. UpdateOne.
  3. Update.
  4. DeleteOne.
  5. Delete.

Hooks are functions that get an ent.Mutator and return a mutator back. They function similarly to the popular HTTP middleware pattern.

package example

import (
    "context"

    "entgo.io/ent"
)

func exampleHook() ent.Hook {
    // Use this to initialize your hook.
    return func(next ent.Mutator) ent.Mutator {
        return ent.MutateFunc(func(ctx context.Context, m ent.Mutation) (ent.Value, error) {
            // Do something before mutation.
            v, err := next.Mutate(ctx, m)
            if err != nil {
                // Do something if error after mutation.
            }
            // Do something after mutation.
            return v, err
        })
    }
}

In Ent, there are two types of mutation hooks - schema hooks and runtime hooks. Schema hooks are mainly used for defining custom mutation logic on a specific entity type, for example, syncing entity creation to another system. Runtime hooks, on the other hand, are used to define more global logic for adding things like logging, metrics, tracing, etc.

For our use case, we should definitely use runtime hooks, because to be valuable we want to export metrics on all operations on all entity types:

package example

import (
    "entprom/ent"
    "entprom/ent/hook"
)

func main() {
    client, _ := ent.Open("sqlite3", "file:ent?mode=memory&cache=shared&_fk=1")

    // Add a hook only on user mutations.
    client.User.Use(exampleHook())

    // Add a hook only on update operations.
    client.Use(hook.On(exampleHook(), ent.OpUpdate|ent.OpUpdateOne))
}

Exporting Prometheus Metrics for an Ent Application

With all of the introductions complete, let’s cut to the chase and show how to use Prometheus and Ent hooks together to create an observable application. Our goal with this example is to export these metrics using a hook:

Metric Name                       Description
ent_operation_total               Number of ent mutation operations
ent_operation_error               Number of failed ent mutation operations
ent_operation_duration_seconds    Time in seconds per operation

Each of these metrics will be broken down by labels into two dimensions:

  • mutation_type: Entity type that is being mutated (User, BlogPost, Account etc.).
  • mutation_op: The operation that is being performed (Create, Delete etc.).

Let’s start by defining our collectors:

// Ent dynamic dimensions
const (
    mutationType = "mutation_type"
    mutationOp   = "mutation_op"
)

var entLabels = []string{mutationType, mutationOp}

// Create a collector for total operations counter
func initOpsProcessedTotal() *prometheus.CounterVec {
    return promauto.NewCounterVec(
        prometheus.CounterOpts{
            Name: "ent_operation_total",
            Help: "Number of ent mutation operations",
        },
        entLabels,
    )
}

// Create a collector for error counter
func initOpsProcessedError() *prometheus.CounterVec {
    return promauto.NewCounterVec(
        prometheus.CounterOpts{
            Name: "ent_operation_error",
            Help: "Number of failed ent mutation operations",
        },
        entLabels,
    )
}

// Create a collector for the duration histogram
func initOpsDuration() *prometheus.HistogramVec {
    return promauto.NewHistogramVec(
        prometheus.HistogramOpts{
            Name: "ent_operation_duration_seconds",
            Help: "Time in seconds per operation",
        },
        entLabels,
    )
}

Next, let’s define our new hook:

// Hook initializes the collectors and returns a mutation hook that counts
// total operations, records errors on mutation failure, and measures duration.
func Hook() ent.Hook {
    opsProcessedTotal := initOpsProcessedTotal()
    opsProcessedError := initOpsProcessedError()
    opsDuration := initOpsDuration()
    return func(next ent.Mutator) ent.Mutator {
        return ent.MutateFunc(func(ctx context.Context, m ent.Mutation) (ent.Value, error) {
            // Before mutation, start measuring time.
            start := time.Now()
            // Extract dynamic labels from mutation.
            labels := prometheus.Labels{mutationType: m.Type(), mutationOp: m.Op().String()}
            // Increment total ops counter.
            opsProcessedTotal.With(labels).Inc()
            // Execute mutation.
            v, err := next.Mutate(ctx, m)
            if err != nil {
                // In case of error increment error counter.
                opsProcessedError.With(labels).Inc()
            }
            // Stop time measure.
            duration := time.Since(start)
            // Record duration in seconds.
            opsDuration.With(labels).Observe(duration.Seconds())
            return v, err
        })
    }
}

Connecting the Prometheus Collector to our Service

After defining our hook, let's now see how to connect it to our application and how to use Prometheus to serve an endpoint that exposes the metrics in our collectors:

package main

import (
    "context"
    "log"
    "net/http"

    "entprom"
    "entprom/ent"

    _ "github.com/mattn/go-sqlite3"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

func createClient() *ent.Client {
    c, err := ent.Open("sqlite3", "file:ent?mode=memory&cache=shared&_fk=1")
    if err != nil {
        log.Fatalf("failed opening connection to sqlite: %v", err)
    }
    ctx := context.Background()
    // Run the auto migration tool.
    if err := c.Schema.Create(ctx); err != nil {
        log.Fatalf("failed creating schema resources: %v", err)
    }
    return c
}

func handler(client *ent.Client) func(w http.ResponseWriter, r *http.Request) {
    return func(w http.ResponseWriter, r *http.Request) {
        ctx := context.Background()
        // Run operations.
        _, err := client.User.Create().SetName("a8m").Save(ctx)
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
    }
}

func main() {
    // Create Ent client and migrate
    client := createClient()
    // Use the hook
    client.Use(entprom.Hook())
    // Simple handler to run actions on our DB.
    http.HandleFunc("/", handler(client))
    // This endpoint exposes our collectors' metrics for Prometheus to scrape.
    http.Handle("/metrics", promhttp.Handler())
    log.Println("server starting on port 8080")
    // Run the server
    log.Fatal(http.ListenAndServe(":8080", nil))
}

After accessing / on our server a few times (using curl or a browser), go to /metrics. There you will see the output from the Prometheus client:

# HELP ent_operation_duration_seconds Time in seconds per operation
# TYPE ent_operation_duration_seconds histogram
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="0.005"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="0.01"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="0.025"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="0.05"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="0.1"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="0.25"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="0.5"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="1"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="2.5"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="5"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="10"} 2
ent_operation_duration_seconds_bucket{mutation_op="OpCreate",mutation_type="User",le="+Inf"} 2
ent_operation_duration_seconds_sum{mutation_op="OpCreate",mutation_type="User"} 0.000265669
ent_operation_duration_seconds_count{mutation_op="OpCreate",mutation_type="User"} 2
# HELP ent_operation_error Number of failed ent mutation operations
# TYPE ent_operation_error counter
ent_operation_error{mutation_op="OpCreate",mutation_type="User"} 1
# HELP ent_operation_total Number of ent mutation operations
# TYPE ent_operation_total counter
ent_operation_total{mutation_op="OpCreate",mutation_type="User"} 2

In the top part, we can see the calculated histogram: it counts the number of operations in each "bucket". Below that, we can see the total number of operations and the number of errors. Each metric is preceded by its description, which can also be seen when querying with the Prometheus dashboard.
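Once Prometheus scrapes these series, they can be queried and graphed. For instance, the following PromQL expression (illustrative) computes the average mutation latency per entity type and operation over the last five minutes:

rate(ent_operation_duration_seconds_sum[5m]) / rate(ent_operation_duration_seconds_count[5m])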

The Prometheus client is only one component of the Prometheus architecture. To run a complete system, including a scraper that will poll your endpoint, a Prometheus server that will store your metrics and answer queries, and a simple UI to interact with it, I recommend reading the official documentation or using the docker-compose.yaml in this example repo.

Future Work on Observability in Ent

As we’ve mentioned above, there is an abundance of metric-collection backends available today, Prometheus being just one of many successful projects. While these solutions differ in many dimensions (self-hosted vs. SaaS, different storage engines with different query languages, and more), from the metric-reporting client's perspective they are virtually identical.

In cases like these, good software engineering principles suggest that the concrete backend should be abstracted away from the client using an interface. This interface can then be implemented by backends so client applications can easily switch between the different implementations. Such changes are happening in recent years in our industry. Consider, for example, the Open Container Initiative or the Service Mesh Interface: both are initiatives that strive to define a standard interface for a problem space. This interface is supposed to create an ecosystem of implementations of the standard. In the observability space, the exact same convergence is occurring with OpenCensus and OpenTracing currently merging into OpenTelemetry.

As nice as it would be to publish an Ent + Prometheus extension similar to the one presented in this post, we are firm believers that observability should be solved with a standards-based approach. We invite everyone to join the discussion on what is the right way to do this for Ent.

Wrap-Up

We started this post by presenting Prometheus, a popular open-source monitoring solution. Next, we reviewed “Hooks”, a feature of Ent that allows adding custom logic before and after operations that change the data entities. We then showed how to integrate the two to create observable applications using Ent. Finally, we discussed the future of observability in Ent and invited everyone to join the discussion to shape it.

Have questions? Need help with getting started? Feel free to join our Discord server or Slack channel.


· 1 min read

It's been almost four months since our last release, and for a good reason. Version 0.9.0, released today, packs some long-awaited features. Chief among them is the Upsert API: a feature that has been discussed for more than a year and a half and was one of the most commonly requested features in the Ent user survey!

Version 0.9.0 adds support for "Upsert" style statements using a new feature flag, sql/upsert. Ent has a collection of feature flags that can be switched on to add more features to the code generated by Ent. Feature flags are used as a mechanism to opt in to features that are not necessarily needed by every project, and as a way to experiment with features that may someday become part of Ent's core.

In this post, we will introduce the new feature, show where it can be useful, and demonstrate how to use it.

Upsert

"Upsert" is a commonly-used term in data systems that is a portmanteau of "update" and "insert". It usually refers to a statement that attempts to insert a record into a table and, if a uniqueness constraint is violated (for example, a record with that ID already exists), updates that record instead. While popular relational databases do not have a specific UPSERT statement, most of them support ways of achieving this type of behavior.

For example, assume we have a table with this definition in an SQLite database:

CREATE TABLE users (
    id integer PRIMARY KEY AUTOINCREMENT,
    email varchar(255) UNIQUE,
    name varchar(255)
)

If we try to execute the same insert twice:

INSERT INTO users (email, name) VALUES ('rotem@entgo.io', 'Rotem Tamir');
INSERT INTO users (email, name) VALUES ('rotem@entgo.io', 'Rotem Tamir');

We get this error:

[2021-08-05 06:49:22] UNIQUE constraint failed: users.email

In many cases it is useful for write operations to be idempotent, meaning we can run them many times in a row while leaving the system in the same state.

In other cases, it is undesirable to query whether a record exists before attempting to create it. For situations like these, SQLite supports the ON CONFLICT clause in INSERT statements. To instruct SQLite to override existing values with new ones, we can run:

INSERT INTO users (email, name) values ('rotem@entgo.io', 'Tamir, Rotem')
ON CONFLICT (email) DO UPDATE SET email=excluded.email, name=excluded.name;

If we prefer to keep the existing values, we can use the DO NOTHING conflict action:

INSERT INTO users (email, name) values ('rotem@entgo.io', 'Tamir, Rotem') 
ON CONFLICT DO NOTHING;

Sometimes we want to merge the two versions in some way; to do that, we can use the DO UPDATE action a little differently, to achieve something like this:

INSERT INTO users (email, name) values ('rotem@entgo.io', 'Tamir, Rotem') 
ON CONFLICT (email) DO UPDATE SET name=excluded.name || ' (formerly: ' || users.name || ')'

In this case, after the second INSERT, the value of the name column would be: Tamir, Rotem (formerly: Rotem Tamir). Not very useful, but hopefully it shows that you can do cool things this way.

Upsert with Ent

Assume we have an existing Ent project with an entity similar to the users table above:

// User holds the schema definition for the User entity.
type User struct {
    ent.Schema
}

// Fields of the User.
func (User) Fields() []ent.Field {
    return []ent.Field{
        field.String("email").
            Unique(),
        field.String("name"),
    }
}

Since the Upsert API is a newly released feature, make sure to update your ent version using:

go get -u entgo.io/ent@v0.9.0

Next, add the sql/upsert feature flag to the code-generation flags in ent/generate.go:

package ent

//go:generate go run -mod=mod entgo.io/ent/cmd/ent generate --feature sql/upsert ./schema
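If your project runs code generation through an ent/entc.go file instead of the ent command-line tool, the same flag can be switched on there as well; a sketch using the entc.FeatureNames option:

err := entc.Generate("./schema", &gen.Config{}, entc.FeatureNames("sql/upsert"))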

Next, re-run code generation for your project:

go generate ./...

Observe that a new method named OnConflict was added to the ent/user_create.go file:

// OnConflict allows configuring the `ON CONFLICT` / `ON DUPLICATE KEY` clause
// of the `INSERT` statement. For example:
//
//    client.User.Create().
//        SetEmailAddress(v).
//        OnConflict(
//            // Update the row with the new values
//            // that were proposed for insertion.
//            sql.ResolveWithNewValues(),
//        ).
//        // Override some of the fields with custom
//        // update values.
//        Update(func(u *ent.UserUpsert) {
//            SetEmailAddress(v+v)
//        }).
//        Exec(ctx)
//
func (uc *UserCreate) OnConflict(opts ...sql.ConflictOption) *UserUpsertOne {
    uc.conflict = opts
    return &UserUpsertOne{
        create: uc,
    }
}

This code (along with more new generated code) will help us achieve upsert behavior for our User entity. To explore it, let's first write a test to reproduce the uniqueness-constraint error:

func TestUniqueConstraintFails(t *testing.T) {
    client := enttest.Open(t, "sqlite3", "file:ent?mode=memory&cache=shared&_fk=1")
    ctx := context.TODO()

    // Create the user for the first time.
    client.User.
        Create().
        SetEmail("rotem@entgo.io").
        SetName("Rotem Tamir").
        SaveX(ctx)

    // Next, try to create a user with the same email address.
    _, err := client.User.
        Create().
        SetEmail("rotem@entgo.io").
        SetName("Rotem Tamir").
        Save(ctx)

    if !ent.IsConstraintError(err) {
        log.Fatalf("expected second created to fail with constraint error")
    }
    log.Printf("second query failed with: %v", err)
}

The test passes:

=== RUN   TestUniqueConstraintFails
2021/08/05 07:12:11 second query failed with: ent: constraint failed: insert node to table "users": UNIQUE constraint failed: users.email
--- PASS: TestUniqueConstraintFails (0.00s)

Next, let's see how to instruct Ent to override the existing values when a conflict occurs:

func TestUpsertReplace(t *testing.T) {
    client := enttest.Open(t, "sqlite3", "file:ent?mode=memory&cache=shared&_fk=1")
    ctx := context.TODO()

    // Create the user for the first time.
    orig := client.User.
        Create().
        SetEmail("rotem@entgo.io").
        SetName("Rotem Tamir").
        SaveX(ctx)

    // Next, create a user with the same email address,
    // this time specifying the `UpdateNewValues` modifier
    // as the behavior on ON CONFLICT.
    newID := client.User.Create().
        SetEmail("rotem@entgo.io").
        SetName("Tamir, Rotem").
        OnConflict().
        UpdateNewValues().
        // We can use the IDX method to receive the ID
        // of the created/updated entity.
        IDX(ctx)

    // We expect the ID of the originally created user to be
    // the same as the ID of the user that was just updated.
    if orig.ID != newID {
        log.Fatalf("expected upsert to update an existing record")
    }

    current := client.User.GetX(ctx, orig.ID)
    if current.Name != "Tamir, Rotem" {
        log.Fatalf("expected upsert to replace with the new values")
    }
}

Running the test:

=== RUN   TestUpsertReplace
--- PASS: TestUpsertReplace (0.00s)
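Replacing everything or keeping everything are not the only options. Mirroring the SQL merge example from earlier, the generated Update method accepts a callback that sets individual columns on conflict; a sketch based on the generated doc comment shown above:

client.User.Create().
    SetEmail("rotem@entgo.io").
    SetName("Tamir, Rotem").
    OnConflict().
    Update(func(u *ent.UserUpsert) {
        // Override only the name column; other columns keep their existing values.
        u.SetName("Tamir, Rotem")
    }).
    ExecX(ctx)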

Alternatively, we can use the Ignore modifier to instruct Ent to keep the old version when resolving the conflict. Let's write a test for it:

func TestUpsertIgnore(t *testing.T) {
    client := enttest.Open(t, "sqlite3", "file:ent?mode=memory&cache=shared&_fk=1")
    ctx := context.TODO()

    // Create the user for the first time.
    orig := client.User.
        Create().
        SetEmail("rotem@entgo.io").
        SetName("Rotem Tamir").
        SaveX(ctx)

    // Next, create a user with the same email address,
    // this time specifying the `Ignore` modifier
    // as the behavior on ON CONFLICT.
    client.User.
        Create().
        SetEmail("rotem@entgo.io").
        SetName("Tamir, Rotem").
        OnConflict().
        Ignore().
        ExecX(ctx)

    current := client.User.GetX(ctx, orig.ID)
    if current.Name != orig.Name {
        log.Fatalf("expected upsert to keep the original version")
    }
}

To learn more about this feature, read the Feature Flag or Upsert API documentation.

Wrapping Up

In this post, we presented the Upsert API, a long-awaited capability that became available in Ent v0.9.0 by enabling a feature flag. We discussed where upserts are commonly used in applications and how they are implemented using common relational databases. Finally, we showed a simple example of how to get started with the Upsert API using Ent.

Have questions? Need help with getting started? Feel free to join our Discord server or Slack channel.


· 1 min read

One of Ent's core principles is "Schema as Code", and this means more than "Ent's DSL for defining entities and their edges uses regular Go code". Compared to many other ORMs, Ent's unique approach enables expressing all of the logic related to an entity as code, directly in the schema definition.

With Ent, developers can write all authorization logic (called Privacy in Ent) and all mutation side-effects (called Hooks in Ent) directly in the schema. Having everything in the same place is very convenient on its own, but its true power shows when it is combined with code generation.

If schemas are defined this way, it becomes possible to automatically generate fully-working, production-grade server code. If we move the responsibility for authorization decisions and custom side-effects from the RPC layer to the data layer, the implementation of basic CRUD (Create, Read, Update and Delete) endpoints becomes generic to the extent that it can be machine-generated. This is exactly the idea behind the popular GraphQL and gRPC Ent extensions.

Today, we would like to introduce a new Ent extension named elk that automatically generates fully-working RESTful API endpoints from your Ent schemas. elk strives to automate all of the tedious work of setting up basic CRUD endpoints for every entity you add to your graph: logging, validation of the request body, eager loading of relations, serialization and more, all while leaving reflection out of sight and maintaining type-safety.

Let's get started!

Getting Started

The final code for this example can be found on GitHub.

Start by creating a new Go project:

mkdir elk-example
cd elk-example
go mod init elk-example

Invoke the Ent code generator and create two schemas, User and Pet:

go run -mod=mod entgo.io/ent/cmd/ent new Pet User

Your project should now look like this:

.
├── ent
│   ├── generate.go
│   └── schema
│       ├── pet.go
│       └── user.go
├── go.mod
└── go.sum

Next, let's add the elk package to our project:

go get -u github.com/masseelch/elk

elk uses the Ent extension API to integrate with Ent's code generation. This requires that we use the entc (ent codegen) package as described here. Follow the next three steps to enable it and to configure Ent to work with the elk extension:

1. Create a new Go file named ent/entc.go and paste the following content:

// +build ignore

package main

import (
    "log"

    "entgo.io/ent/entc"
    "entgo.io/ent/entc/gen"
    "github.com/masseelch/elk"
)

func main() {
    ex, err := elk.NewExtension(
        elk.GenerateSpec("openapi.json"),
        elk.GenerateHandlers(),
    )
    if err != nil {
        log.Fatalf("creating elk extension: %v", err)
    }
    err = entc.Generate("./schema", &gen.Config{}, entc.Extensions(ex))
    if err != nil {
        log.Fatalf("running ent codegen: %v", err)
    }
}

2. Edit the ent/generate.go file to execute the ent/entc.go file:

package ent

//go:generate go run -mod=mod entc.go

3. elk uses some external packages in its generated code. Currently, you have to get those packages manually once when setting up elk:

go get github.com/mailru/easyjson github.com/masseelch/render github.com/go-chi/chi/v5 go.uber.org/zap

With these steps complete, everything is set up for using elk-powered Ent! To learn more about Ent, how to connect to different types of databases, run migrations or work with entities, head over to the Setup Tutorial.

Generating HTTP CRUD Handlers with elk

To generate fully-working HTTP handlers, we first need to create an Ent schema definition. Open and edit ent/schema/pet.go:

package schema

import (
    "entgo.io/ent"
    "entgo.io/ent/schema/field"
)

// Pet holds the schema definition for the Pet entity.
type Pet struct {
    ent.Schema
}

// Fields of the Pet.
func (Pet) Fields() []ent.Field {
    return []ent.Field{
        field.String("name"),
        field.Int("age"),
    }
}

We added two fields, name and age, to our Pet entity. The ent.Schema just defines the fields of our entity. To generate runnable code from our schema, run:

go generate ./...

Note that in addition to the files Ent normally generates, another directory named ent/http was created. These files were generated by the elk extension and contain the code for the generated HTTP handlers. For example, here is some of the generated code for a read operation on the Pet entity:

const (
    PetCreate Routes = 1 << iota
    PetRead
    PetUpdate
    PetDelete
    PetList
    PetRoutes = 1<<iota - 1
)

// PetHandler handles http crud operations on ent.Pet.
type PetHandler struct {
    handler

    client *ent.Client
    log    *zap.Logger
}

func NewPetHandler(c *ent.Client, l *zap.Logger) *PetHandler {
    return &PetHandler{
        client: c,
        log:    l.With(zap.String("handler", "PetHandler")),
    }
}

// Read fetches the ent.Pet identified by a given url-parameter from the
// database and renders it to the client.
func (h *PetHandler) Read(w http.ResponseWriter, r *http.Request) {
    l := h.log.With(zap.String("method", "Read"))
    // ID is URL parameter.
    id, err := strconv.Atoi(chi.URLParam(r, "id"))
    if err != nil {
        l.Error("error getting id from url parameter", zap.String("id", chi.URLParam(r, "id")), zap.Error(err))
        render.BadRequest(w, r, "id must be an integer greater zero")
        return
    }
    // Create the query to fetch the Pet
    q := h.client.Pet.Query().Where(pet.ID(id))
    e, err := q.Only(r.Context())
    if err != nil {
        switch {
        case ent.IsNotFound(err):
            msg := stripEntError(err)
            l.Info(msg, zap.Error(err), zap.Int("id", id))
            render.NotFound(w, r, msg)
        case ent.IsNotSingular(err):
            msg := stripEntError(err)
            l.Error(msg, zap.Error(err), zap.Int("id", id))
            render.BadRequest(w, r, msg)
        default:
            l.Error("could not read pet", zap.Error(err), zap.Int("id", id))
            render.InternalServerError(w, r, nil)
        }
        return
    }
    l.Info("pet rendered", zap.Int("id", id))
    easyjson.MarshalToHTTPResponseWriter(NewPet2657988899View(e), w)
}

Next, let's see how to create a RESTful HTTP server that can manage your Pet entities. Create a new Go file named main.go and add the following content:

package main

import (
    "context"
    "fmt"
    "log"
    "net/http"

    "elk-example/ent"
    elk "elk-example/ent/http"

    "github.com/go-chi/chi/v5"
    _ "github.com/mattn/go-sqlite3"
    "go.uber.org/zap"
)

func main() {
    // Create the ent client.
    c, err := ent.Open("sqlite3", "./ent.db?_fk=1")
    if err != nil {
        log.Fatalf("failed opening connection to sqlite: %v", err)
    }
    defer c.Close()
    // Run the auto migration tool.
    if err := c.Schema.Create(context.Background()); err != nil {
        log.Fatalf("failed creating schema resources: %v", err)
    }
    // Router and Logger.
    r, l := chi.NewRouter(), zap.NewExample()
    // Create the pet handler.
    r.Route("/pets", func(r chi.Router) {
        elk.NewPetHandler(c, l).Mount(r, elk.PetRoutes)
    })
    // Start listening to incoming requests.
    fmt.Println("Server running")
    defer fmt.Println("Server stopped")
    if err := http.ListenAndServe(":8080", r); err != nil {
        log.Fatal(err)
    }
}

Next, start the server:

go run -mod=mod main.go

Congratulations! We now have a running server serving the Pets API. We could ask the server for a list of all pets in the database, but there are none yet. Let's create one first:

curl -X 'POST' -H 'Content-Type: application/json' -d '{"name":"Kuro","age":3}' 'localhost:8080/pets'

You should get a response like this:

{
  "age": 3,
  "id": 1,
  "name": "Kuro"
}

If you head over to the terminal where the server is running, you can see that elk comes with built-in logging:

{
  "level": "info",
  "msg": "pet rendered",
  "handler": "PetHandler",
  "method": "Create",
  "id": 1
}

elk uses zap for logging. To learn more about it, have a look at zap's documentation.

Relations

To demonstrate more of elk's features, let's extend our graph. Edit ent/schema/user.go and ent/schema/pet.go:

ent/schema/pet.go
// Edges of the Pet.
func (Pet) Edges() []ent.Edge {
    return []ent.Edge{
        edge.From("owner", User.Type).
            Ref("pets").
            Unique(),
    }
}

ent/schema/user.go
package schema

import (
    "entgo.io/ent"
    "entgo.io/ent/schema/edge"
    "entgo.io/ent/schema/field"
)

// User holds the schema definition for the User entity.
type User struct {
    ent.Schema
}

// Fields of the User.
func (User) Fields() []ent.Field {
    return []ent.Field{
        field.String("name"),
        field.Int("age"),
    }
}

// Edges of the User.
func (User) Edges() []ent.Edge {
    return []ent.Edge{
        edge.To("pets", Pet.Type),
    }
}

We now have a One-To-Many relation between the Pet and User schemas: a Pet belongs to a User, and a User can have multiple Pets.

Re-run the code generator:

go generate ./...

Do not forget to register the UserHandler on our router. Just add the following lines to main.go:

[...]
    r.Route("/pets", func(r chi.Router) {
        elk.NewPetHandler(c, l).Mount(r, elk.PetRoutes)
    })
+   // Create the user handler.
+   r.Route("/users", func(r chi.Router) {
+       elk.NewUserHandler(c, l).Mount(r, elk.UserRoutes)
+   })
    // Start listening to incoming requests.
    fmt.Println("Server running")
[...]

After restarting the server, we can create a User that owns the previously created pet named Kuro:

curl -X 'POST' -H 'Content-Type: application/json' -d '{"name":"Elk","age":30,"owner":1}' 'localhost:8080/users'

The server returns this response:

{
  "age": 30,
  "edges": {},
  "id": 1,
  "name": "Elk"
}

We can see from the output that the user was created, but the edges are empty. elk does not include edges in its output by default. You can configure elk to render edges using a feature called "serialization groups". Annotate your schemas with the elk.SchemaAnnotation and elk.Annotation structs. Edit ent/schema/user.go and add these annotations:

// Edges of the User.
func (User) Edges() []ent.Edge {
    return []ent.Edge{
        edge.To("pets", Pet.Type).
            Annotations(elk.Groups("user")),
    }
}

// Annotations of the User.
func (User) Annotations() []schema.Annotation {
    return []schema.Annotation{elk.ReadGroups("user")}
}

The elk.Annotation added to fields and edges tells elk to eager-load them and add them to the payload if the "user" group is requested. The elk.SchemaAnnotation is used to make the read-operation of the UserHandler request the "user" group. Note that fields without any serialization groups attached are included by default, while edges are excluded unless configured otherwise.

Next, let's regenerate the code once again and restart the server. You should now see the user's pets rendered:

curl 'localhost:8080/users/1'

{
  "age": 30,
  "edges": {
    "pets": [
      {
        "id": 1,
        "name": "Kuro",
        "age": 3,
        "edges": {}
      }
    ]
  },
  "id": 1,
  "name": "Elk"
}

Request Validation

With our current schema it is possible to set a negative age for pets and users, and to create a pet without an owner (as we did with Kuro). Ent has built-in support for basic validation, but in some cases you may want to validate requests made against your API before the payload even reaches Ent. elk uses the validator package to define validation rules and validate data. We can configure validation rules for create and update operations separately using elk.Annotation. In our example, suppose we want the Pet schema to only allow ages greater than zero and to reject the creation of pets without an owner. Let's edit ent/schema/pet.go:

// Fields of the Pet.
func (Pet) Fields() []ent.Field {
    return []ent.Field{
        field.String("name"),
        field.Int("age").
            Positive().
            Annotations(
                elk.CreateValidation("required,gt=0"),
                elk.UpdateValidation("gt=0"),
            ),
    }
}

// Edges of the Pet.
func (Pet) Edges() []ent.Edge {
    return []ent.Edge{
        edge.From("owner", User.Type).
            Ref("pets").
            Unique().
            Required().
            Annotations(elk.Validation("required")),
    }
}
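Under the hood, these rule strings use the tag syntax of the go-playground validator package. A self-contained sketch of how such rules evaluate, with a hypothetical request struct standing in for the payload elk validates:

package main

import (
    "fmt"

    "github.com/go-playground/validator/v10"
)

// createPetRequest is a hypothetical stand-in for a create-pet payload.
type createPetRequest struct {
    Name string `validate:"required"`
    Age  int    `validate:"required,gt=0"`
}

func main() {
    v := validator.New()
    // Age violates "gt=0", so Struct returns an error describing the failure.
    err := v.Struct(createPetRequest{Name: "Bob", Age: -2})
    fmt.Println(err)
}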

Next, regenerate the code and restart the server. To test our new validation rules, let's try to create a pet with a negative age and no owner:

curl -X 'POST' -H 'Content-Type: application/json' -d '{"name":"Bob","age":-2}' 'localhost:8080/pets'

elk returns a detailed response with information on which validations failed:

{
  "code": 400,
  "status": "Bad Request",
  "errors": {
    "Age": "This value failed validation on 'gt:0'.",
    "Owner": "This value is required."
  }
}

Note the capitalized field names. The validator package uses the struct's field names to generate its validation errors, but you can easily override this.

If you do not define any validation rules, elk will not include validation code in the generated code. elk's request validation is especially useful when you want to do cross-field validation.

Upcoming Features

elk already has some useful features, but there is a lot more to come. The next versions of elk will include:

  • A fully-working flutter frontend to administrate your nodes
  • Integration of Ent's validation with the current request validator
  • More transport formats (currently JSON only)

Wrapping Up

In this post we have only scratched the surface of what elk can do. For more examples, have a look at the project's README on GitHub. I hope that with elk-powered Ent, you and your fellow developers can automate the repetitive tasks involved in building RESTful APIs and focus on more meaningful work.

elk is in an early stage of development; suggestions and feedback are very welcome, and any help would be greatly appreciated. Use GitHub Issues for help, feedback, suggestions and contributions.

About the Author

MasseElch is a software engineer from the windy, flat north of Germany. When not hiking with his dog Kuro (who has his own Instagram channel 😱) or playing hide-and-seek with his son, he drinks coffee and enjoys coding.

· 1 min read

Locks are one of the fundamental building blocks of any concurrent computer program. When many things are happening simultaneously, programmers reach out to locks to guarantee the mutual exclusion of concurrent access to a resource. Locks (and other mutual exclusion primitives) exist in many different layers of the stack from low-level CPU instructions to application-level APIs (such as sync.Mutex in Go).

When working with relational databases, one of the common needs of application developers is the ability to acquire a lock on records. Imagine an inventory table, listing items available for sale on an e-commerce website. This table might have a column named state that could either be set to available or purchased. To avoid the scenario where two users think they have successfully purchased the same inventory item, the application must prevent two operations from mutating the item from an available to a purchased state.

How can the application guarantee this? Having the server check if the desired item is available before setting it to purchased would not be good enough. Imagine a scenario where two users simultaneously try to purchase the same item. Two requests would travel from their browsers to the application server and arrive roughly at the same time. Both would query the database for the item's state, and see the item is available. Seeing this, both request handlers would issue an UPDATE query setting the state to purchased and the buyer_id to the id of the requesting user. Both queries will succeed, but the final state of the record will be that the user who issued the UPDATE query last will be considered the buyer of the item.

Over the years, different techniques have evolved to allow developers to write applications that provide these guarantees to users. Some of them involve explicit locking mechanisms provided by databases, while others rely on more general ACID properties of databases to achieve mutual exclusion. In this post we will explore the implementation of two of these techniques using Ent.

Optimistic Locking

Optimistic locking (sometimes also called Optimistic Concurrency Control) is a technique that can be used to achieve locking behavior without explicitly acquiring a lock on any record.

On a high-level, this is how optimistic locking works:

  • Each record is assigned a numeric version number. This value must be monotonically increasing. Often Unix timestamps of the latest row update are used.
  • A transaction reads a record, noting its version number from the database.
  • An UPDATE statement is issued to modify the record:
    • The statement must include a predicate requiring that the version number has not changed from its previous value. For example: WHERE id=<id> AND version=<previous version>.
    • The statement must increase the version. Some applications will increase the current value by 1, and some will set it to the current timestamp.
  • The database returns the number of rows modified by the UPDATE statement. If the number is 0, this means someone else modified the record between the time we read it and the time we wanted to update it. The transaction is considered failed, rolled back, and can be retried.

Optimistic locking is commonly used in "low contention" environments (situations where the likelihood of two transactions interfering with one another is relatively low) and where the locking logic can be trusted to happen in the application layer. If there are writers to the database that we cannot ensure to obey the required logic, this technique is rendered useless.
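In raw SQL, the guarded update from the steps above looks roughly like this (table and values are illustrative):

-- Succeeds only if nobody bumped the version since we read the row;
-- an affected-row count of 0 signals that we lost the race.
UPDATE users
SET online = false, version = 1629882479000000000
WHERE id = 1 AND version = 1629881000000000000;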

Let’s see how this technique can be employed using Ent.

We start by defining our ent.Schema for a User. The user has an online boolean field to specify whether they are currently online and an int64 field for the current version number.

// User holds the schema definition for the User entity.
type User struct {
    ent.Schema
}

// Fields of the User.
func (User) Fields() []ent.Field {
    return []ent.Field{
        field.Bool("online"),
        field.Int64("version").
            DefaultFunc(func() int64 {
                return time.Now().UnixNano()
            }).
            Comment("Unix time of when the latest update occurred"),
    }
}

Next, let's implement a simple optimistically locked update to our online field:

func optimisticUpdate(tx *ent.Tx, prev *ent.User, online bool) error {
    // The next version number for the record must monotonically increase;
    // using the current timestamp is a common technique to achieve this.
    nextVer := time.Now().UnixNano()

    // We begin the update operation:
    n := tx.User.Update().
        // We limit our update to only work on the correct record and version:
        Where(user.ID(prev.ID), user.Version(prev.Version)).
        // We set the next version:
        SetVersion(nextVer).
        // We set the value we were passed by the user:
        SetOnline(online).
        SaveX(context.Background())

    // SaveX returns the number of affected records. If this value is
    // different from 1, the record must have been changed by another
    // process.
    if n != 1 {
        return fmt.Errorf("update failed: user id=%d updated by another process", prev.ID)
    }
    return nil
}
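
Because a failed optimistic update only means another process got there first, callers will usually retry the whole read-modify-write cycle. The following helper (retryOptimistic and its attempt count are illustrative, not part of the original example) sketches such a loop:

func retryOptimistic(ctx context.Context, client *ent.Client, id int, online bool) error {
    const maxAttempts = 3
    for i := 0; i < maxAttempts; i++ {
        tx, err := client.Tx(ctx)
        if err != nil {
            return err
        }
        // Re-read the record on every attempt so the version predicate
        // reflects its latest state.
        prev, err := tx.User.Get(ctx, id)
        if err != nil {
            tx.Rollback()
            return err
        }
        if err := optimisticUpdate(tx, prev, online); err != nil {
            // Another process won the race; roll back and try again.
            tx.Rollback()
            continue
        }
        return tx.Commit()
    }
    return fmt.Errorf("update failed after %d attempts", maxAttempts)
}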

Next, let's write a test to verify that if two processes try to edit the same record, only one will succeed:

func TestOCC(t *testing.T) {
    client := enttest.Open(t, "sqlite3", "file:ent?mode=memory&cache=shared&_fk=1")
    ctx := context.Background()

    // Create the user for the first time.
    orig := client.User.Create().SetOnline(true).SaveX(ctx)

    // Read another copy of the same user.
    userCopy := client.User.GetX(ctx, orig.ID)

    // Open a new transaction:
    tx, err := client.Tx(ctx)
    if err != nil {
        log.Fatalf("failed creating transaction: %v", err)
    }

    // Try to update the record once. This should succeed.
    if err := optimisticUpdate(tx, userCopy, false); err != nil {
        tx.Rollback()
        log.Fatal("unexpected failure:", err)
    }

    // Try to update the record a second time. This should fail.
    err = optimisticUpdate(tx, orig, false)
    if err == nil {
        log.Fatal("expected second update to fail")
    }
    fmt.Println(err)
}

Running our test:

=== RUN   TestOCC
update failed: user id=1 updated by another process
--- PASS: TestOCC (0.00s)

Great! Using optimistic locking we can prevent two processes from stepping on each other's toes!

Pessimistic Locking

As we've mentioned above, optimistic locking isn't always appropriate. For use cases where we prefer to delegate the responsibility for maintaining the integrity of the lock to the database, some database engines (such as MySQL, Postgres, and MariaDB, but not SQLite) offer pessimistic locking capabilities. These databases support a modifier to SELECT statements called SELECT ... FOR UPDATE. The MySQL documentation explains:

A SELECT ... FOR UPDATE reads the latest available data, setting exclusive locks on each row it reads. Thus, it sets the same locks a searched SQL UPDATE would set on the rows.

Alternatively, users can use SELECT ... FOR SHARE statements. As the docs explain, SELECT ... FOR SHARE:

Sets a shared mode lock on any rows that are read. Other sessions can read the rows, but cannot modify them until your transaction commits. If any of these rows were changed by another transaction that has not yet committed, your query waits until that transaction ends and then uses the latest values.

Ent has recently added support for FOR SHARE / FOR UPDATE statements via a feature flag called sql/lock. To use it, modify your generate.go file to include --feature sql/lock:

//go:generate go run -mod=mod entgo.io/ent/cmd/ent generate --feature sql/lock ./schema 

Next, let's implement a function that will use pessimistic locking to make sure only a single process can update our User object's online field:

func pessimisticUpdate(tx *ent.Tx, id int, online bool) (*ent.User, error) {
    ctx := context.Background()

    // On our active transaction, we begin a query against the user table.
    u, err := tx.User.Query().
        // We add a predicate limiting the lock to the user we want to update.
        Where(user.ID(id)).
        // We use the ForUpdate method to tell ent to ask our DB to lock
        // the returned records for update.
        ForUpdate(
            // We specify that the query should not wait for the lock to be
            // released and instead fail immediately if the record is locked.
            sql.WithLockAction(sql.NoWait),
        ).
        Only(ctx)

    // If we failed to acquire the lock, we do not proceed to update the record.
    if err != nil {
        return nil, err
    }

    // Finally, we set the online field to the desired value.
    return u.Update().SetOnline(online).Save(ctx)
}
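
The same feature flag also generates a ForShare method on queries. As a minimal sketch (the helper readForShare is ours, not part of the post's example), a shared lock lets other transactions keep reading the row while blocking writes until ours ends:

func readForShare(tx *ent.Tx, id int) (*ent.User, error) {
    return tx.User.Query().
        Where(user.ID(id)).
        // ForShare acquires a shared lock: concurrent reads are allowed,
        // but writers must wait until our transaction ends.
        ForShare().
        Only(context.Background())
}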

Now, let's write a test that verifies that if two processes try to edit the same record, only one will succeed:

func TestPessimistic(t *testing.T) {
    ctx := context.Background()
    client := enttest.Open(t, dialect.MySQL, "root:pass@tcp(localhost:3306)/test?parseTime=True")

    // Create the user for the first time.
    orig := client.User.Create().SetOnline(true).SaveX(ctx)

    // Open a new transaction. This transaction will acquire the lock on our user record.
    tx, err := client.Tx(ctx)
    if err != nil {
        log.Fatalf("failed creating transaction: %v", err)
    }
    defer tx.Commit()

    // Open a second transaction. This transaction is expected to fail at
    // acquiring the lock on our user record.
    tx2, err := client.Tx(ctx)
    if err != nil {
        log.Fatalf("failed creating transaction: %v", err)
    }
    defer tx2.Commit()

    // The first update is expected to succeed.
    if _, err := pessimisticUpdate(tx, orig.ID, true); err != nil {
        log.Fatalf("unexpected error: %s", err)
    }

    // Because we did not run tx.Commit yet, the row is still locked when
    // we try to update it a second time. This operation is expected to
    // fail.
    _, err = pessimisticUpdate(tx2, orig.ID, true)
    if err == nil {
        log.Fatal("expected second update to fail")
    }
    fmt.Println(err)
}

A few things are worth mentioning in this example:

  • Notice that we run this test against a real MySQL instance, as SQLite does not support SELECT ... FOR UPDATE.
  • For the simplicity of the example, we used the sql.NoWait option to tell the database to return an error if the lock cannot be acquired. This means that the calling application needs to retry the write after receiving the error. If we don't specify this option, we can create flows where our application blocks until the lock is released and then proceeds without retrying. This is not always desirable, but it opens up some interesting design options.
  • We must always commit our transaction. Forgetting to do so can result in some serious issues. Remember that while the lock is held, no other transaction can lock or modify this record.

Running our test:

=== RUN   TestPessimistic
Error 3572: Statement aborted because lock(s) could not be acquired immediately and NOWAIT is set.
--- PASS: TestPessimistic (0.08s)

Great! We have used MySQL's "locking reads" capabilities and Ent's new support for it to implement a locking mechanism that provides real mutual exclusion guarantees.

Conclusion

We began this post by presenting the type of business requirements that lead application developers to reach out for locking techniques when working with databases. We continued by presenting two different approaches to achieving mutual exclusion when updating database records and demonstrated how to employ these techniques using Ent.

Have questions? Need help with getting started? Feel free to join our Discord server or Slack channel.

For more Ent news and updates:

· 1 min read

TL;DR

We added a new integration to the Ent GraphQL extension that generates type-safe GraphQL filters (i.e. Where predicates) from an ent/schema, and allows users to seamlessly map GraphQL queries to Ent queries.

For example, to get all COMPLETED todo items, we can execute the following:

query QueryAllCompletedTodos {
  todos(
    where: {
      status: COMPLETED,
    },
  ) {
    edges {
      node {
        id
      }
    }
  }
}

The generated GraphQL filters follow the Ent syntax. This means the following query is also valid:

query FilterTodos {
  todos(
    where: {
      or: [
        {
          hasParent: false,
          status: COMPLETED,
        },
        {
          status: IN_PROGRESS,
          hasParentWith: {
            priorityLT: 1,
            statusNEQ: COMPLETED,
          },
        }
      ]
    },
  ) {
    edges {
      node {
        id
      }
    }
  }
}

Background

Many libraries that deal with data in Go choose the path of passing around empty interface instances (interface{}) and use reflection at runtime to figure out how to map data to struct fields. Aside from the performance penalty of using reflection everywhere, the big negative impact on teams is the loss of type-safety.

When APIs are explicit, known at compile-time (or even as we type), the feedback a developer receives around a large class of errors is almost immediate. Many defects are found early, and development is also much more fun!

Ent was designed to provide an excellent developer experience for teams working on applications with large data-models. To facilitate this, we decided early on that one of the core design principles of Ent is "statically typed and explicit API using code generation". This means that for every entity a developer defines in their ent/schema, explicit, type-safe code is generated for the developer to efficiently interact with their data. For example, in the Filesystem Example in the ent repository, you will find a schema named File:

// File holds the schema definition for the File entity.
type File struct {
    ent.Schema
}

// Fields of the File.
func (File) Fields() []ent.Field {
    return []ent.Field{
        field.String("name"),
        field.Bool("deleted").
            Default(false),
        field.Int("parent_id").
            Optional(),
    }
}

When the Ent code-gen runs, it will generate many predicate functions. For example, the following function can be used to filter Files by their name field:

package file
// .. truncated ..

// Name applies the EQ predicate on the "name" field.
func Name(v string) predicate.File {
    return predicate.File(func(s *sql.Selector) {
        s.Where(sql.EQ(s.C(FieldName), v))
    })
}
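
As a quick usage sketch (assuming a generated ent client for this schema; the helper name and the query value are just illustrative), the generated predicate composes directly into a query:

func filesNamed(ctx context.Context, client *ent.Client, name string) ([]*ent.File, error) {
    // file.Name is the generated predicate shown above.
    return client.File.Query().
        Where(file.Name(name)).
        All(ctx)
}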

GraphQL is a query language for APIs originally created at Facebook. Similar to Ent, GraphQL models data in graph concepts and facilitates type-safe queries. Around a year ago, we released an integration between Ent and GraphQL. Similar to the gRPC Integration, the goal for this integration is to allow developers to easily create API servers that map to Ent, to mutate and query data in their databases.

Automatic GraphQL Filters Generation

In a recent community survey, the Ent + GraphQL integration was mentioned as one of the most loved features of the Ent project. Until today, the integration allowed users to perform useful, albeit basic queries against their data. Today, we announce the release of a feature that we think will open up many interesting new use cases for Ent users: "Automatic GraphQL Filters Generation".

As we have seen above, the Ent code-gen maintains for us a suite of predicate functions in our Go codebase that allow us to easily and explicitly filter data from our database tables. This power was, until recently, not available (at least not automatically) to users of the Ent + GraphQL integration. With automatic GraphQL filter generation, by making a single-line configuration change, developers can now add to their GraphQL schema a complete set of "Filter Input Types" that can be used as predicates in their GraphQL queries. In addition, the implementation provides runtime code that parses these predicates and maps them into Ent queries. Let's see this in action:

Generating Filter Input Types

In order to generate input filters (e.g. TodoWhereInput) for each type in your ent/schema package, edit the ent/entc.go configuration file as follows:

// +build ignore

package main

import (
    "log"

    "entgo.io/contrib/entgql"
    "entgo.io/ent/entc"
    "entgo.io/ent/entc/gen"
)

func main() {
    ex, err := entgql.NewExtension(
        entgql.WithWhereFilters(true),
        entgql.WithConfigPath("../gqlgen.yml"),
        entgql.WithSchemaPath("<PATH-TO-GRAPHQL-SCHEMA>"),
    )
    if err != nil {
        log.Fatalf("creating entgql extension: %v", err)
    }
    err = entc.Generate("./schema", &gen.Config{}, entc.Extensions(ex))
    if err != nil {
        log.Fatalf("running ent codegen: %v", err)
    }
}

If you're new to Ent and GraphQL, please follow the Getting Started Tutorial.

Next, run go generate ./ent/.... Observe that Ent has generated <T>WhereInput for each type in your schema. Ent will update the GraphQL schema as well, so you don't need to autobind them to gqlgen manually. For example:

ent/where_input.go
// TodoWhereInput represents a where input for filtering Todo queries.
type TodoWhereInput struct {
    Not *TodoWhereInput   `json:"not,omitempty"`
    Or  []*TodoWhereInput `json:"or,omitempty"`
    And []*TodoWhereInput `json:"and,omitempty"`

    // "created_at" field predicates.
    CreatedAt      *time.Time  `json:"createdAt,omitempty"`
    CreatedAtNEQ   *time.Time  `json:"createdAtNEQ,omitempty"`
    CreatedAtIn    []time.Time `json:"createdAtIn,omitempty"`
    CreatedAtNotIn []time.Time `json:"createdAtNotIn,omitempty"`
    CreatedAtGT    *time.Time  `json:"createdAtGT,omitempty"`
    CreatedAtGTE   *time.Time  `json:"createdAtGTE,omitempty"`
    CreatedAtLT    *time.Time  `json:"createdAtLT,omitempty"`
    CreatedAtLTE   *time.Time  `json:"createdAtLTE,omitempty"`

    // "status" field predicates.
    Status      *todo.Status  `json:"status,omitempty"`
    StatusNEQ   *todo.Status  `json:"statusNEQ,omitempty"`
    StatusIn    []todo.Status `json:"statusIn,omitempty"`
    StatusNotIn []todo.Status `json:"statusNotIn,omitempty"`

    // .. truncated ..
}
todo.graphql
"""
TodoWhereInput is used for filtering Todo objects.
Input was generated by ent.
"""
input TodoWhereInput {
not: TodoWhereInput
and: [TodoWhereInput!]
or: [TodoWhereInput!]

"""created_at field predicates"""
createdAt: Time
createdAtNEQ: Time
createdAtIn: [Time!]
createdAtNotIn: [Time!]
createdAtGT: Time
createdAtGTE: Time
createdAtLT: Time
createdAtLTE: Time

"""status field predicates"""
status: Status
statusNEQ: Status
statusIn: [Status!]
statusNotIn: [Status!]

# .. truncated ..
}

Next, to complete the integration we need to make two more changes:

1. Edit the GraphQL schema to accept the new filter types:

type Query {
  todos(
    after: Cursor,
    first: Int,
    before: Cursor,
    last: Int,
    orderBy: TodoOrder,
    where: TodoWhereInput,
  ): TodoConnection!
}

2. Use the new filter types in GraphQL resolvers:

func (r *queryResolver) Todos(ctx context.Context, after *ent.Cursor, first *int, before *ent.Cursor, last *int, orderBy *ent.TodoOrder, where *ent.TodoWhereInput) (*ent.TodoConnection, error) {
    return r.client.Todo.Query().
        Paginate(ctx, after, first, before, last,
            ent.WithTodoOrder(orderBy),
            ent.WithTodoFilter(where.Filter),
        )
}
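
The generated where-input also exposes the Filter method passed to ent.WithTodoFilter above, so the same input type can be applied to a plain Ent query outside a resolver. A minimal sketch, assuming the generated Todo client and status enum shown earlier:

func completedTodos(ctx context.Context, client *ent.Client) ([]*ent.Todo, error) {
    // Build the same input a GraphQL caller would send as `where`.
    status := todo.StatusCompleted
    where := &ent.TodoWhereInput{Status: &status}

    // Filter applies the input's predicates to the query and returns it.
    query, err := where.Filter(client.Todo.Query())
    if err != nil {
        return nil, err
    }
    return query.All(ctx)
}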

Filter Specification

As mentioned above, with the new GraphQL filter types, you can express the same Ent filters you use in your Go code.

Conjunction, disjunction and negation

The Not, And and Or operators can be added using the not, and and or fields. For example:

{
  or: [
    {
      status: COMPLETED,
    },
    {
      not: {
        hasParent: true,
        status: IN_PROGRESS,
      }
    }
  ]
}

When multiple filter fields are provided, Ent implicitly adds the And operator.

{
  status: COMPLETED,
  textHasPrefix: "GraphQL",
}

The above query will produce the following Ent query:

client.Todo.
    Query().
    Where(
        todo.And(
            todo.StatusEQ(todo.StatusCompleted),
            todo.TextHasPrefix("GraphQL"),
        ),
    ).
    All(ctx)

Edge/Relation filters

Edge (relation) predicates can be expressed in the same Ent syntax:

{
  hasParent: true,
  hasChildrenWith: {
    status: IN_PROGRESS,
  }
}

The above query will produce the following Ent query:

client.Todo.
    Query().
    Where(
        todo.HasParent(),
        todo.HasChildrenWith(
            todo.StatusEQ(todo.StatusInProgress),
        ),
    ).
    All(ctx)

Implementation Example

A working example exists in github.com/a8m/ent-graphql-example.

Wrapping Up

As we've discussed earlier, Ent has set creating a "statically typed and explicit API using code generation" as a core design principle. With automatic GraphQL filter generation, we are doubling down on this idea to provide developers with the same explicit, type-safe development experience on the RPC layer as well.

Have questions? Need help with getting started? Feel free to join our Discord server or Slack channel.

For more Ent news and updates:

· 1 min read

A few months ago, we announced an experimental capability to generate Go gRPC servers from Ent schemas. The implementation was not yet complete, but we released it so the community could experiment with it and provide us with feedback.

Having received a lot of feedback from the community, today we are announcing that the Ent + gRPC integration is "Ready for Usage". This means all of the basic features are complete, and we expect that most Ent applications can make use of this integration.

What has been added since the initial announcement?

  • "Optional Fields"のサポート - Protobufに共通する問題は、nil値の表現方法です。ゼロ値のプリミティブ・フィールドは、バイナリ表現にエンコードされません。 つまり、アプリケーションはプリミティブ・フィールドのゼロと「存在しない」を区別することができません。 これをサポートするために Protobuf プロジェクトは "Well-Known-Types" をサポートし、プリミティブ値を構造体でラップします。 これは以前はサポートされていませんでしたが、現在ではentprotoがProtobufメッセージ定義を生成する際に、これらのラッパー型を使用して"Optional"のentフィールドを表現します。

    // Code generated by entproto. DO NOT EDIT.
    syntax = "proto3";

    package entpb;

    import "google/protobuf/wrappers.proto";

    message User {
      int32 id = 1;

      string name = 2;

      string email_address = 3;

      google.protobuf.StringValue alias = 4;
    }
  • Multi-edge support - When we released the initial version of protoc-gen-entgrpc, we only supported generating gRPC service implementations for "unique" edges (i.e., edges that can reference at most one entity). Since a recent version, the plugin supports the generation of gRPC methods to read and write entities with O2M and M2M relationships.

  • Partial responses - By default, the service's Get method does not return edge information. This is done deliberately, because the number of entities related to a single entity is unbounded.

    To allow callers to specify whether or not edge information should be returned, the generated service adheres to Google AIP-157 (Partial Responses). In short, the Get<T>Request message includes an enum named View, which lets the caller control whether this information should be retrieved from the database (see the client sketch after this list):

    message GetUserRequest {
      int32 id = 1;

      View view = 2;

      enum View {
        VIEW_UNSPECIFIED = 0;

        BASIC = 1;

        WITH_EDGE_IDS = 2;
      }
    }
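
As a rough illustration of how a caller selects a view, here is a minimal client sketch. The import path myapp/ent/proto/entpb and the server address are hypothetical placeholders; the service and message names follow the entproto-generated code for the User message shown above:

package main

import (
    "context"
    "log"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"

    // Hypothetical path to the entproto-generated package.
    "myapp/ent/proto/entpb"
)

func main() {
    // Dial the gRPC server (the address is illustrative).
    conn, err := grpc.Dial("localhost:5000",
        grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        log.Fatalf("failed dialing server: %v", err)
    }
    defer conn.Close()

    client := entpb.NewUserServiceClient(conn)
    // Ask the server to include edge IDs in the response (AIP-157 view).
    user, err := client.Get(context.Background(), &entpb.GetUserRequest{
        Id:   1,
        View: entpb.GetUserRequest_WITH_EDGE_IDS,
    })
    if err != nil {
        log.Fatalf("failed retrieving user: %v", err)
    }
    log.Println(user)
}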

Getting Started

For more Ent news and updates: