Performance Best Practices with Gin

The Gin framework is a popular choice for building web services in Go. As application complexity and traffic grow, performance becomes a factor you cannot ignore. This article walks through a set of practical techniques for building services with Gin, covering route optimization, memory reuse, request and response optimization, asynchronous processing, and performance profiling, to help you build a more stable and efficient web service.

Route Registration Optimization: Avoiding Circular References


Gin’s router is built on a radix tree, which matches request paths very quickly. However, if routes are registered carelessly, with confusing nesting, circular references between route groups, or duplicate registrations, routing performance can degrade and registration conflicts can appear.

Common Problems: Route Circular References and Registration Conflicts


When defining routes, unreasonable grouping or duplicate definitions may lead to performance degradation or functional anomalies. For example:


admin := r.Group("/admin")
admin.GET("/:id", func(c *gin.Context) {
	c.JSON(200, gin.H{"message": "admin route"})
})

// Error: conflicts with the route registered above
r.GET("/admin/:id", func(c *gin.Context) {
	c.JSON(200, gin.H{"message": "conflicting route"})
})
Optimization Approach: Route Grouping and Consistent Management


Use route groups consistently:

Logically group related routes to avoid duplicate registrations.

Optimization example:


admin := r.Group("/admin")
{
	admin.GET("/:id", func(c *gin.Context) {
		c.JSON(200, gin.H{"message": "admin with ID"})
	})
	admin.POST("/", func(c *gin.Context) {
		c.JSON(200, gin.H{"message": "create admin"})
	})
}

Avoid conflicts between dynamic and static routes:

When dynamic routes (e.g., :id) and static routes (e.g., /edit) coexist, ensure the correct order of route definitions.

Optimization example:


r.GET("/users/edit", func(c *gin.Context) {
c.JSON(200, gin.H{"message": "edit user"})
})
r.GET("/users/:id", func(c *gin.Context) {
c.JSON(200, gin.H{"user_id": c.Param("id")})
})
Memory Reuse and Object Pooling (sync.Pool)


Under high concurrency, frequently allocating and freeing temporary objects degrades performance and puts pressure on the garbage collector (GC). Go provides sync.Pool, an object pool for reusing temporary objects and reducing GC pressure.

Usage Scenarios


In Gin, common temporary objects include results of JSON data parsing, storage of query parameters, and so on.

How to Use sync.Pool


sync.Pool provides a thread-safe object pool for storing reusable objects.

Example: Reusing buffers for JSON encoding


import (
	"bytes"
	"encoding/json"
	"net/http"
	"sync"

	"github.com/gin-gonic/gin"
)

// A json.Encoder must be bound to a writer, so the reusable object here is
// the underlying buffer rather than the encoder itself.
var bufPool = sync.Pool{
	New: func() interface{} {
		return new(bytes.Buffer)
	},
}

func handler(c *gin.Context) {
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset()
	defer bufPool.Put(buf) // Return the buffer to the pool

	if err := json.NewEncoder(buf).Encode(map[string]string{"message": "hello"}); err != nil {
		c.Status(http.StatusInternalServerError)
		return
	}
	c.Data(http.StatusOK, "application/json; charset=utf-8", buf.Bytes())
}
Built-in Reuse in Gin


Gin itself already uses some efficient internal designs, such as buffer reuse and static resource caching. Developers should make full use of the capabilities provided by the framework.

Performance Optimization for Requests and Responses

Scenario:


In high-concurrency scenarios, the server must handle a large volume of requests while keeping response times stable. Without optimization, you may see increased latency or even request timeouts.

Optimization Strategies:


Connection Pool Optimization:

For high-concurrency database or external service requests, using a connection pool is crucial.

For database connection pools, configure the underlying *sql.DB that GORM exposes via db.DB(), for example:


sqlDB, _ := db.DB()
sqlDB.SetMaxOpenConns(100) // Maximum number of connections
sqlDB.SetMaxIdleConns(20) // Maximum number of idle connections
sqlDB.SetConnMaxLifetime(time.Hour) // Maximum lifetime of a connection
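
For context, here is a minimal sketch of the full setup, assuming GORM v2 with the MySQL driver (the openDB helper and its dsn parameter are illustrative):


import (
	"time"

	"gorm.io/driver/mysql"
	"gorm.io/gorm"
)

func openDB(dsn string) (*gorm.DB, error) {
	db, err := gorm.Open(mysql.Open(dsn), &gorm.Config{})
	if err != nil {
		return nil, err
	}

	sqlDB, err := db.DB() // Underlying *sql.DB that owns the connection pool
	if err != nil {
		return nil, err
	}
	sqlDB.SetMaxOpenConns(100)
	sqlDB.SetMaxIdleConns(20)
	sqlDB.SetConnMaxLifetime(time.Hour)
	return db, nil
}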

Streamlining Middleware:

Reduce the number of global middleware and ensure each request only undergoes necessary processing.
Some time-consuming operations, such as logging, can be performed asynchronously:


r.Use(func(c *gin.Context) {
	cp := c.Copy() // Copy the context before handing it to a goroutine
	go func() {
		log.Printf("Request from %s", cp.ClientIP())
	}()
	c.Next()
})

If similar operations need to be performed for each request, batch methods can be used to reduce performance overhead. For example, logging and authentication can be combined into a single middleware.
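
As a hedged sketch of that idea, the middleware below folds a simple header check and asynchronous logging into one handler (the Authorization header check and the authAndLog name are illustrative, not a prescribed scheme):


import (
	"log"

	"github.com/gin-gonic/gin"
)

// authAndLog combines authentication and asynchronous logging in one middleware,
// so each request passes through a single chain entry instead of two.
func authAndLog() gin.HandlerFunc {
	return func(c *gin.Context) {
		// Authentication check (illustrative: require an Authorization header).
		if c.GetHeader("Authorization") == "" {
			c.AbortWithStatusJSON(401, gin.H{"error": "unauthorized"})
			return
		}

		// Log asynchronously from a copy of the context, since the goroutine
		// may outlive the request handler.
		cp := c.Copy()
		go func() {
			log.Printf("%s %s from %s", cp.Request.Method, cp.Request.URL.Path, cp.ClientIP())
		}()

		c.Next()
	}
}

Register it once with r.Use(authAndLog()).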

JSON Serialization Optimization:

The standard encoding/json package is comparatively slow. A faster drop-in replacement such as jsoniter can speed up serialization in your own code:


import jsoniter "github.com/json-iterator/go"

// Drop-in replacement for encoding/json in your own code
var json = jsoniter.ConfigCompatibleWithStandardLibrary

func exampleHandler(c *gin.Context) {
	data := map[string]string{"message": "hello"}
	body, _ := json.Marshal(data) // Serialize with jsoniter
	c.Data(200, "application/json; charset=utf-8", body)
}
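
Note that c.JSON itself keeps using Gin's bundled encoder; to have Gin serialize with jsoniter internally, compile the project with the jsoniter build tag:


go build -tags=jsoniter .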

Limiting Request Body Size:

Restrict the size of upload request bodies to reduce memory consumption:


r.Use(func(c *gin.Context) {
	c.Request.Body = http.MaxBytesReader(c.Writer, c.Request.Body, 10*1024*1024) // Limit to 10 MB
	c.Next()
})

Cache Optimization:
Use Go’s built-in sync.Map or an external cache such as Redis to avoid repeated database lookups (a Redis-based variant is sketched after the sync.Map example below):


var cache sync.Map

func getCachedUser(id uint) (*User, error) {
	if data, ok := cache.Load(id); ok {
		return data.(*User), nil
	}

	var user User
	if err := db.First(&user, id).Error; err != nil {
		return nil, err
	}

	cache.Store(id, &user)
	return &user, nil
}
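
And here is a minimal cache-aside sketch using the go-redis client, which is an assumed dependency here; the key, the TTL, and the loadMessage stand-in are illustrative:


import (
	"context"
	"time"

	"github.com/redis/go-redis/v9"
)

var rdb = redis.NewClient(&redis.Options{Addr: "localhost:6379"})

func getCachedMessage(ctx context.Context, key string) (string, error) {
	// Try the cache first.
	val, err := rdb.Get(ctx, key).Result()
	if err == nil {
		return val, nil
	}
	if err != redis.Nil {
		return "", err // A real Redis error, not just a cache miss
	}

	// Cache miss: compute or load the value, then store it with a TTL.
	val = loadMessage(key)
	if err := rdb.Set(ctx, key, val, 10*time.Minute).Err(); err != nil {
		return "", err
	}
	return val, nil
}

// loadMessage stands in for the real data source (database, API, and so on).
func loadMessage(key string) string {
	return "value for " + key
}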
Asynchronous Processing

Scenario:


Some tasks (such as file uploads, sending emails, or data processing) can be very time-consuming. Handling them directly inside the request handler significantly increases response latency and hurts performance.

Optimization Strategies:


Asynchronous Tasks:

Use Goroutines to move time-consuming tasks out of the main request flow.


r.POST("/upload", func(c *gin.Context) {
go func() {
// Time-consuming operation (e.g., store file)
}()
c.JSON(200, gin.H{"message": "Processing in background"})
})

Task Queue:

For more complex asynchronous tasks, use a message queue (such as Kafka or RabbitMQ) to enqueue tasks for dedicated workers to process; a simple in-process stand-in is sketched after the pseudocode below.


// Example: Send task to queue
queue.Publish(task)
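
The queue.Publish call above is pseudocode. As a minimal in-process stand-in for the same pattern, a buffered channel drained by a fixed pool of worker goroutines works well; the Task type and the taskQueue, startWorkers, and publish names below are illustrative:


import "log"

// Task is a placeholder for whatever payload your workers consume.
type Task struct {
	ID int
}

// A buffered channel acts as the in-process queue.
var taskQueue = make(chan Task, 1024)

// startWorkers launches n goroutines that drain the queue.
func startWorkers(n int) {
	for i := 0; i < n; i++ {
		go func() {
			for task := range taskQueue {
				// Process the task (e.g., send an email, resize an image).
				log.Printf("processed task %d", task.ID)
			}
		}()
	}
}

// publish enqueues a task without blocking the request handler
// (drops the task if the queue is full; a real broker would persist it).
func publish(t Task) bool {
	select {
	case taskQueue <- t:
		return true
	default:
		return false
	}
}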

Rate-limited Asynchronous Tasks:

Limit the number of Goroutines for asynchronous tasks to avoid excessive resource usage.

Below is a simple rate-limiting example that uses the golang.org/x/sync/semaphore package to bound the number of concurrently running goroutines. In real-world applications, tune this to your workload:


import "golang.org/x/sync/semaphore"

var sem = semaphore.NewWeighted(10) // Max concurrency of 10

func processTask() {
if err := sem.Acquire(context.Background(), 1); err == nil {
defer sem.Release(1)
// Execute task
}
}
Using pprof to Analyze Performance Bottlenecks


Go provides the powerful net/http/pprof tool to analyze runtime performance, including CPU usage, memory allocation, and Goroutine execution.

Enabling pprof


By importing the net/http/pprof package, you can quickly start performance analysis tools:


import _ "net/http/pprof"

func main() {
r := gin.Default()

go func() {
// Start Pprof service
http.ListenAndServe("localhost:6060", nil)
}()

r.GET("/", func(c *gin.Context) {
c.JSON(200, gin.H{"message": "hello"})
})
r.Run(":8080")
}

You can then open http://localhost:6060/debug/pprof/ in a browser to see the available profiles (CPU profile, heap, goroutine, and so on).

Generating Performance Reports


Use the pprof tool to generate performance reports and visualize analysis:


go tool pprof http://localhost:6060/debug/pprof/profile

In the interactive interface, you can use top to view hotspot functions, or web to generate visual reports (Graphviz installation required).
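
For example, a heap profile can be fetched and explored in pprof's browser-based UI (assuming a reasonably recent Go toolchain; the port 8081 is arbitrary):


go tool pprof -http=:8081 http://localhost:6060/debug/pprof/heap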

Best Practices Summary


In this article, we introduced several tips and optimization methods to improve the performance of Gin. Here are some key best practices to further optimize your Gin applications:

Route Optimization

  • Avoid Route Conflicts: Ensure route registrations are clear, and avoid conflicts between dynamic and static routes. By grouping routes logically, you can simplify route structures and reduce unnecessary routing overhead.
  • Grouped Routes: Manage related routes through groups to improve code maintainability and avoid duplicate registrations.
Memory Reuse

  • Use sync.Pool Object Pools: In high-concurrency environments, use sync.Pool to reuse memory objects, avoid frequent memory allocations and garbage collection, and reduce GC pressure.
  • Leverage Built-in Framework Features: Gin has implemented many optimizations internally, such as buffer reuse and static resource caching. Developers should make the most of these built-in capabilities.
Request and Response Optimization

  • Connection Pool Management: For database or external service requests, configure reasonable connection pools to reduce the overhead of connection creation and destruction, thus improving request response speed.
  • Streamline Middleware: Reduce unnecessary middleware and ensure that each request goes through only essential processing. By making time-consuming operations asynchronous, you can minimize the delay in the main request flow.
  • Use Efficient JSON Serialization: Use more efficient JSON serialization libraries (such as jsoniter) to replace Go’s standard encoding/json library, thereby improving the performance of JSON serialization and deserialization.
Asynchronous Processing

  • Make Time-Consuming Operations Asynchronous: For time-consuming operations such as file uploads and sending emails, use Goroutines for background asynchronous processing to avoid blocking the request flow.
  • Use Message Queues for Complex Asynchronous Tasks: For complex tasks, use message queues (such as Kafka or RabbitMQ) to enqueue tasks, allowing independent worker threads to process them.
Performance Analysis

  • Use pprof for Performance Analysis: By importing the net/http/pprof package, you can quickly enable performance analysis tools to examine CPU usage, memory allocation, and Goroutine execution. Use performance reports to identify hotspot functions and further optimize performance.

By applying the above techniques, you can gradually improve the performance and stability of services built on the Gin framework.



