When joining an existing project, most of these decisions are already made, and it's easy for individual contributors to go with the status quo. Having the knowledge to build one from scratch will not only give you more freedom to build your own apps, but will also provide you with a better understanding of why your existing codebase is built the way it is.
This blog post will walk through the steps of bootstrapping a new Golang backend:
- Choosing an HTTP framework
- Designing a scalable folder structure
- Establishing good practices for a test suite
This post will also discuss some of the not-so-evident choices that need to be made along the way, as well as some thoughts from my 8th Light colleagues on an interesting test-related question.
This post assumes you have Go properly installed and ready to use. Without further ado, let's get right to it!
Choosing An HTTP Framework
There are many HTTP frameworks available for Golang projects, including Echo, Iris, Beego, and Gin. When choosing one, developers weigh factors such as execution speed, clarity, popularity, maintenance, and their own experience. Popularity and maintenance go hand in hand: the more popular a framework is, the more people are interested in actively maintaining it. When picking an open source dependency, it is better to choose one that will receive bug fixes and address newly surfaced security issues when they appear. The team's prior experience has to be taken into account as well to make the most of your resources, although it is better not to base the decision solely on it.
If you are building a prototype, a more batteries-included framework such as Iris may suit you better than a lightweight one. If you are building a long-term product, Echo may be a better choice, as it gives you finer control over the behavior.
This was the popularity status of these frameworks as of May 2022:
Framework | Stars | Forks | First release | Last release |
---|---|---|---|---|
Gin-Gonic | 58.8k | 6.6k | May 22, 2015 | November 24, 2021 |
Beego | 28.2k | 5.5k | May 19, 2017 | April 25, 2022 |
Echo | 22.4k | 2k | April 1, 2015 | March 16, 2022 |
Iris | 22.3k | 2.4k | October 26, 2019 | May 8, 2022 |
Gin claims to be faster than other frameworks due to its use of httprouter. It is minimal and intuitive, has tagged releases and continuous deployments, and it enjoys the biggest community of them all. It serves our needs well. Note, however, that all of the other frameworks have had more recent releases than Gin-Gonic, which may be worth investigating.
In the end, you can build the same applications in any framework. As your app scales, it will become increasingly difficult to change frameworks, and you may start to wonder if the one you chose was the right one for the project. However, there is no way of knowing a priori the challenges that you will face in the future, and making a choice and getting started is better than spending days trying to learn the details of all available options.
This example will use Gin-Gonic as its framework because of its huge community, its speed, and the fact that it is still receiving releases. Now we just go get it:
go get github.com/gin-gonic/gin
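If you want a quick sanity check that the dependency resolves and compiles, a throwaway main.go along these lines should start a server on localhost:8080 (the /ping route is just a placeholder):

    package main

    import "github.com/gin-gonic/gin"

    func main() {
        router := gin.Default()

        // Temporary route just to verify the framework works.
        router.GET("/ping", func(c *gin.Context) {
            c.JSON(200, gin.H{"message": "pong"})
        })

        router.Run("localhost:8080")
    }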
The next step is to design a scalable folder structure and lay out the project's source code. Our choice of framework will help along the way.
Designing A Scalable Folder Structure
The Go community recommends keeping the architecture as flat as possible, which means you can start adding .go files right in your root folder. However, at 8th Light we've seen how tossing Go files and folders into the root directory can make further development more complicated. Check out golang-standards/project-layout for some ideas on how to organize large Go projects.
The root folder also contains some files that are not part of the source code, or even part of the application logic itself. You will have README.md, .gitignore, go.mod, and go.sum right from the start, not to mention other directories like /vendor and /.github. However, it is useful to have at least one .go file in the root, as it will declare your package main, and it can be picked up by automatic runners like Google Cloud's App Engine with its default settings.
All of these problems are solved with the following structure. Imagine an app with two different HTTP resources, users and pings:
root
│ README.md
│ go.mod
│ go.sum
│ .gitignore
│ ...
│ app.go
│
└── app/
│ │ launcher.go
│ │
│ └── users/
│ │ │ router.go
│ │ │ controller.go
│ │ │ ...
│ │
│ └── pings/
│ │ router.go
│ │ controller.go
│ │ ...
│
└── config/
│ │ config.go
│ │ environment.yaml
│
└── db/
│ │ schema.sql
│ │
│ └── migrations/
│ └── seeds/
└── vendor/
This structure allows all of the user-related routes to be built in their own router, which the app launcher can use alongside the pings router to build the entire app. When testing the users resource, one can build the users routes independently from the rest of the app.
Following the ports and adapters pattern (also known as hexagonal architecture), interfaces should be defined where they are used, not where their implementations live. Writing Go interfaces the way you would write Java interfaces, next to the types that implement them, is a recurrent anti-pattern and a common example of Go interface misuse.
Thinking of /app as the folder containing exclusively the relation between business logic and HTTP, some things should remain separate. The /config folder can take care of things like reading environment variables or parsing a .yaml configuration file. It is kept apart from the main app because the app itself shouldn't be aware of environment variables or any specific configuration source. The /db folder is similar in this regard: it can be managed with any external tool, and the migrations and schema shouldn't be part of the app.
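As a rough sketch of what config/config.go could look like, here is one way to read environment variables with fallbacks; the variable names and defaults below are hypothetical:

    package config

    import "os"

    // Config holds the values the rest of the app needs at startup.
    type Config struct {
        DatabaseURL string
        Port        string
    }

    // Load reads configuration from environment variables,
    // falling back to defaults that are convenient for local development.
    func Load() Config {
        return Config{
            DatabaseURL: getEnv("DATABASE_URL", "postgres://localhost:5432/my_project_dev"),
            Port:        getEnv("PORT", "8080"),
        }
    }

    func getEnv(key, fallback string) string {
        if value, ok := os.LookupEnv(key); ok {
            return value
        }
        return fallback
    }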
But how does the ports and adapters architecture work, and how does it look in this folder structure? Let's start with the file app/users/router.go:
package users

import "github.com/gin-gonic/gin"

// Interface declaring methods of the users controller
type Controllable interface {
    GetUser(c *gin.Context)
}

// Add all user routes to the given gin.Engine
func Routes(engine *gin.Engine, controllable Controllable) {
    engine.GET("/users/:id", controllable.GetUser)
}
The function Routes adds the path /users/:id to the received engine. This allows you to use composition to build all of the routes in your app, or to build just the users routes when testing the users package.
Adding a similar file in app/pings/router.go, we can have:
package pings

import "github.com/gin-gonic/gin"

// Interface declaring methods of the ping controller
type Pingable interface {
    Ping(c *gin.Context)
}

// Add all ping routes to the given gin.Engine
func Routes(engine *gin.Engine, pingable Pingable) {
    engine.GET("/ping", pingable.Ping)
}
Notice that both files declare the interface they are using. The implementation of each interface lives in the corresponding controller.go file, which allows you to mock the controller in tests.
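For reference, a minimal app/users/controller.go that satisfies the Controllable interface could look like this; the hard-coded response is a placeholder for whatever your domain logic eventually returns:

    package users

    import (
        "net/http"

        "github.com/gin-gonic/gin"
    )

    // Controller implements the Controllable interface declared in router.go.
    type Controller struct{}

    // GetUser handles GET /users/:id.
    func (ctrl Controller) GetUser(c *gin.Context) {
        id := c.Param("id")

        // Placeholder response; a real controller would look the user up in storage.
        c.JSON(http.StatusOK, gin.H{"id": id})
    }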
It all comes together in the app/launcher.go file:
package app

import (
    "fmt"

    "my_project/app/pings"
    "my_project/app/users"

    "github.com/gin-gonic/gin"
)

// Launch starts the app on localhost:8080
func Launch() {
    router := gin.Default()

    pings.Routes(router, pings.Controller{})
    users.Routes(router, users.Controller{})

    address := "localhost:8080"
    if err := router.Run(address); err != nil {
        panic(fmt.Errorf("failed to listen on %s: %w", address, err))
    }
}
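For completeness, the root app.go mentioned earlier can stay tiny: it declares package main and simply hands control to the launcher (my_project is the placeholder module name used in this post):

    package main

    import "my_project/app"

    func main() {
        app.Launch()
    }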
The implementation of the controllers is not important for the time being. You can reuse the same composition pattern from launcher.go to build tests for your routes and controllers.
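As a sketch of what that can look like, a test for the ping route can build only the pings routes, pass in a hand-rolled test double, and exercise them through net/http/httptest (the stub below is hypothetical):

    package pings_test

    import (
        "net/http"
        "net/http/httptest"
        "testing"

        "my_project/app/pings"

        "github.com/gin-gonic/gin"
    )

    // stubController is a test double that satisfies the Pingable interface.
    type stubController struct{}

    func (s stubController) Ping(c *gin.Context) {
        c.String(http.StatusOK, "pong")
    }

    func TestPingRoute(t *testing.T) {
        engine := gin.New()
        pings.Routes(engine, stubController{})

        recorder := httptest.NewRecorder()
        request := httptest.NewRequest(http.MethodGet, "/ping", nil)
        engine.ServeHTTP(recorder, request)

        if recorder.Code != http.StatusOK {
            t.Fatalf("expected status %d, got %d", http.StatusOK, recorder.Code)
        }
        if recorder.Body.String() != "pong" {
            t.Fatalf("expected body %q, got %q", "pong", recorder.Body.String())
        }
    }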
Establishing Good Practices For A Test Suite
Go comes with a minimal built-in testing framework. Although it works great, it can be extended with BDD frameworks like Goblin and Ginkgo, and/or with assertion libraries like Gomega and stretchr/testify.
The tradeoff of using BDD frameworks is that parallelizing the tests becomes less obvious and harder to manage: the BeforeEach hook is shared across unit tests, so parallel tests may conflict with each other; the alternative is a completely sequential test suite, and no one likes that. If we had to choose one BDD framework, we would go with the Ginkgo+Gomega combo, as it reads well and has support for data generators, mocks, and grouping.
In this blog we are using only the stretchr/testify assertion library, as it is much more popular than Gomega alone (as of May 2022):
Library | Stars | Forks |
---|---|---|
stretchr/testify | 16.3k | 1.2k |
onsi/gomega | 1.6k | 245 |
> go get github.com/stretchr/testify
At this point we can pause and consider some edge cases, beginning with good practices for handling your test database. Test databases should always be in a known state, probably empty; and even though it would be amazing to create a new database for every tested package in every run, it is more practical to test everything against the same database and design your tests so that they avoid reaching the database at all, using mocks and test doubles.
In the event that your test does need to reach the database (e.g., when testing read/write operations), it should wrap everything in a database transaction that is rolled back upon completion. This allows parallel testing without conflicts: a test can read its own uncommitted changes from inside the transaction, but other tests cannot see them from outside it.
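One way to express that rollback pattern with the standard database/sql package is a small test helper; the helper name, package placement, and the way the *sql.DB handle is obtained are assumptions:

    package storage_test // hypothetical package that talks to the database

    import (
        "database/sql"
        "testing"
    )

    // withRollback runs fn inside a transaction that is always rolled back,
    // leaving the test database untouched no matter what fn does.
    func withRollback(t *testing.T, db *sql.DB, fn func(tx *sql.Tx)) {
        t.Helper()

        tx, err := db.Begin()
        if err != nil {
            t.Fatalf("failed to begin transaction: %v", err)
        }
        // Roll back after the test body runs; the rollback error is irrelevant here.
        defer tx.Rollback()

        fn(tx)
    }

A test then performs its reads and writes through tx, and the deferred rollback discards them when the test finishes.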
Another controversial topic is whether or not to test private methods. In Go, this translates to whether to create tests inside the same package where the code is defined, or in a separate _test package:
// Source code
package add

func Two(a int, b int) int {
    return a + b
}
This is how it looks when tested as an exported function, from the outside:
// Test file
package add_test

import (
    "testing"

    "my_app/add"

    "github.com/stretchr/testify/assert"
)

func TestTwo(t *testing.T) {
    assert.Equal(t, 2, add.Two(1, 1))
}
And how it looks tested from the inside:
package add

import (
    "testing"

    "github.com/stretchr/testify/assert"
)

func TestTwo(t *testing.T) {
    assert.Equal(t, 2, Two(1, 1))
}
Testing from the same package becomes a problem when you inevitably end up testing private (unexported) functions directly. In the end, private methods are still code, right? And all code should be tested.
However, tests themselves are code too, and since all code needs to be maintained, you will end up making lots of modifications whenever a requirement changes. Private methods should support simple, internal functionality. If you feel unsafe leaving a private method untested, that is a good sign that the method should be extracted into its own package, where it can be tested through its exported API.
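For example, if an unexported helper inside the users package grows complex enough that leaving it untested feels risky, it can be promoted to its own small package and tested through its exported API (the names here are hypothetical):

    // app/emails/emails.go
    package emails

    import "strings"

    // Normalize lowercases an address and trims surrounding whitespace.
    func Normalize(address string) string {
        return strings.ToLower(strings.TrimSpace(address))
    }

The users package then calls emails.Normalize, and the behavior can be covered from an emails test file without touching the users package internals.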
I asked some of my colleagues at 8th Light about their opinion on this topic. This is what they said:
Don't test [private methods] explicitly. By testing only via the public interface you will inevitably end up testing the private ones anyways. This is the main reason why you often see people complaining about refactoring creating a mess. Because when tests are structured and dependent on the *internal* structure of the thing you're testing you will never be able to freely refactor and *not* have to update a bunch of tests. — Christoph Gockel, principal crafter
For me, when it comes to testing private methods, the "should" questions are not as relevant as the "is" questions. In most cases, the code and the tests already exist and you have to live in that reality. Is refactoring the difficult to test private code the ideal solution? Surely. But it's also risky because as we established, it's currently untested. So is it the current "best" use of your time and money to do that work right away? That's not nearly as sure in every case. — Brian Porter, principal crafter
When you have decided you want a private method, and you know what it is gonna be, [and] you want to test it, that's a case where you're doing TDD wrong, by putting the cart before the horse. The test isn't *driving* anything - you already decided what you want and you're trying to test it. It's test-after with extra steps. — Eric Smith, principal crafter
If something private inside an encapsulation is complex enough that testing it would be beneficial, take that opportunity to identify if it can be described as a specific domain and pull the logic into that and test. — Hank Lin, senior crafter
For more in-depth advice on what it means to build useful tests, check out my colleague Robert Wenner's awesome 8th Light University video: 100% Test Coverage.
Legacy code bases might not facilitate such a free decision-making environment, but as we are building from scratch, we have the privilege of choosing the best practices for our app. This time that means testing only exported methods.
Wrapping Up
This blog post walked through some of the decision making involved in creating a new Golang backend from scratch, researching the existing tools and layering our experience on top of them. This structure should give you a robust enough codebase that you and your teammates can join and start coding right away, following a well-designed status quo.