Last week, while working on a REST API project, I ran into an interesting situation. When fetching data from PostgreSQL using Dapper, some queries were running slower than expected. I knew Dapper was fast, but this time something was off.
Dapper is known as a micro ORM in the .NET world and is quite popular. It’s not as heavy as Entity Framework, giving you full control over SQL. I usually use Dapper in my REST API projects because of its performance and my preference for writing SQL 🙂
Why Dapper?
There are several reasons I prefer Dapper over Entity Framework. First, you write SQL queries yourself, which gives you complete control. Second, performance is genuinely good, especially with large datasets. Third, it has a simple and easy-to-learn structure.
Of course, everything has its trade-offs. In Dapper, you need to write SQL, which can sometimes be cumbersome. But I consider this an advantage because you know exactly what’s happening at the database level.
It works well with PostgreSQL. With the Npgsql library, you establish a connection and run queries with Dapper. The setup is simple: you install Dapper and Npgsql packages from NuGet, and that’s it.
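If you're starting a project from scratch, adding the two packages from the command line looks like this (assuming you use the dotnet CLI):

dotnet add package Dapper
dotnet add package Npgsql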
Simple Query Example
Let me give a simple example. Suppose you want to fetch data from a user table. First, you place your connection string in appsettings.json, then execute your query with Dapper, and finally get the results. It’s that easy, taking about 5 minutes.
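As a quick sketch of the configuration side (assuming the Microsoft.Extensions.Configuration packages and a connection string key named “Default”; adjust to your own setup):

using Microsoft.Extensions.Configuration;

var configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json")
    .Build();

// GetConnectionString reads from the "ConnectionStrings" section of appsettings.json.
var connectionString = configuration.GetConnectionString("Default");

In an ASP.NET Core project you would normally get IConfiguration injected rather than building it by hand.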
The typical code looks like this:
using (var connection = new NpgsqlConnection(connectionString))
{
    var users = await connection.QueryAsync<User>("SELECT * FROM users WHERE active = @Active", new { Active = true });
    return users.ToList();
}
As you see, SQL is written directly, and parameters are sent as an anonymous object. Dapper automatically maps the results to your User class. Isn’t this pretty practical?
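For reference, a minimal User class that the query above could map onto might look like this (the property names are my assumptions based on the columns in the query):

public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
    public bool Active { get; set; }
}

Dapper matches columns to properties case-insensitively, so the lowercase active column lands on the Active property. For snake_case columns like user_id, setting Dapper.DefaultTypeMap.MatchNamesWithUnderscores = true does the trick.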
Performance Issues and Solutions
Let’s return to the performance issues I initially mentioned. When investigating the slow queries, I noticed a few things. First, missing indexes were slowing down queries. Second, some queries had unnecessary JOINs, which I cleaned up. Third, I optimized connection pooling settings.
By the way, I suspect the issue I ran into last time was similar. Forgetting indexes is a common mistake 🙂
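To find out which queries actually need an index, PostgreSQL’s EXPLAIN ANALYZE is the first thing I’d reach for. A hypothetical example against the users table:

EXPLAIN ANALYZE SELECT * FROM users WHERE email = 'test@example.com';

If the plan shows a sequential scan over a large table, a missing index is the usual suspect.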
To create indexes in PostgreSQL, you use the CREATE INDEX command. For example, to add an index to the email column in the users table, you do:
CREATE INDEX idx_users_email ON users(email);
After adding indexes, query speed improved significantly. If I remember correctly, a query that had been taking 3-4 seconds dropped to around 200-300 milliseconds. Either way, the difference was huge.
Connection Pooling Settings
Connection pooling is also important. Npgsql uses pooling by default, but sometimes you need to optimize the settings. You can add Minimum Pool Size and Maximum Pool Size parameters to your connection string.
For example, you can use a connection string like:
"Host=localhost;Database=mydb;Username=postgres;Password=12345;Minimum Pool Size=5;Maximum Pool Size=20;"
This configuration keeps 5 to 20 connections in the pool. Instead of opening a new connection each time, you get one from the pool, which boosts performance.
Of course, these values depend on your project. For small projects, 5-20 is enough, but for larger systems, you might need 50-100 or more. I typically use 10-30, and it works well.
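If you prefer building the connection string in code, Npgsql’s NpgsqlConnectionStringBuilder exposes the same settings as properties. A minimal sketch using the values from above:

var builder = new NpgsqlConnectionStringBuilder
{
    Host = "localhost",
    Database = "mydb",
    Username = "postgres",
    Password = "12345",
    MinPoolSize = 5,
    MaxPoolSize = 20
};
var connectionString = builder.ConnectionString;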
Using Async/Await
Using async/await with Dapper is also crucial. Especially in REST APIs, asynchronous operations free up threads while waiting on the database, which significantly improves throughput. You can use methods like QueryAsync and ExecuteAsync for asynchronous queries.
For example, to add a user asynchronously, you can write:
public async Task<int> AddUserAsync(User user)
{
    using (var connection = new NpgsqlConnection(connectionString))
    {
        var sql = "INSERT INTO users (name, email, active) VALUES (@Name, @Email, @Active) RETURNING id;";
        var id = await connection.ExecuteScalarAsync<int>(sql, user);
        return id;
    }
}
The “RETURNING id” part is PostgreSQL-specific. It returns the newly created ID directly after insertion, saving you a second round trip to fetch it, which is very handy.
When using async, make sure the connection lives inside a using block so it is closed and returned to the pool automatically. Forgetting this can cause connection leaks and slowdowns.
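On newer versions you can also let the disposal itself run asynchronously with await using; a minimal sketch (assuming C# 8+ and an Npgsql version that implements IAsyncDisposable):

// The connection is disposed and returned to the pool even if the query throws.
await using var connection = new NpgsqlConnection(connectionString);
var count = await connection.ExecuteScalarAsync<long>("SELECT count(*) FROM users");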
Multiple Result Sets
Sometimes, you may want to fetch multiple result sets in a single query. Dapper’s QueryMultiple method allows this. For example, to fetch user details and their orders simultaneously:
using (var connection = new NpgsqlConnection(connectionString))
{
    var sql = "SELECT * FROM users WHERE id = @Id; SELECT * FROM orders WHERE user_id = @Id;";
    using (var multi = await connection.QueryMultipleAsync(sql, new { Id = userId }))
    {
        var user = await multi.ReadSingleAsync<User>();
        var orders = await multi.ReadAsync<Order>();
        return (user, orders.ToList());
    }
}
You batch the two queries into a single command and execute them in one round trip. This reduces the number of database trips and improves performance. It’s not always necessary, but it can be useful at times.
I wrote this code while sitting at a cafe in Bursa, probably while enjoying a coffee 🙂 Anyway, I went a bit off-topic.
Using Stored Procedures
You can also use stored procedures with PostgreSQL. To call one from Dapper, you set the commandType parameter to CommandType.StoredProcedure.
using (var connection = new NpgsqlConnection(connectionString))
{
    var result = await connection.QueryAsync<User>("get_active_users", commandType: CommandType.StoredProcedure);
    return result.ToList();
}
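One caveat worth knowing: in PostgreSQL, routines that return result sets are usually written as functions rather than procedures, and as of Npgsql 7, CommandType.StoredProcedure issues a CALL, which targets procedures. If get_active_users is a function (my assumption here), calling it with a plain SELECT sidesteps the ambiguity:

using (var connection = new NpgsqlConnection(connectionString))
{
    // Assumes get_active_users() is a set-returning SQL function.
    var result = await connection.QueryAsync<User>("SELECT * FROM get_active_users()");
    return result.ToList();
}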
Stored procedures are handy for complex operations. For example, if you perform actions on multiple tables, you can put SQL code into a stored procedure and call it once.
I personally don’t use them often, preferring to keep SQL code on the C# side for easier management. But this is a personal preference, and others might think differently.
Transaction Management
Using transactions is also critical. Particularly when several operations must succeed or fail together, you need to start a transaction. In Dapper, you work with the IDbTransaction interface.
using (var connection = new NpgsqlConnection(connectionString))
{
    await connection.OpenAsync();
    using (var transaction = connection.BeginTransaction())
    {
        try
        {
            await connection.ExecuteAsync("INSERT INTO users …", user, transaction);
            await connection.ExecuteAsync("INSERT INTO orders …", order, transaction);
            transaction.Commit();
        }
        catch
        {
            transaction.Rollback();
            throw;
        }
    }
}
After starting a transaction, you pass the transaction object to your commands. If an error occurs, you call Rollback; if everything succeeds, you call Commit. This ensures data consistency: without a transaction you could end up with the user inserted but the order missing. With one, either everything succeeds or nothing does.
Conclusion and Recommendations
Using Dapper with PostgreSQL is really practical. It’s fast, simple to use, and gives you full control over SQL. You can comfortably develop REST APIs with this setup.
Here are a few tips: first, don’t forget indexes — they are crucial for performance. Second, always use async/await, especially in API projects. Third, optimize your connection pooling settings. Fourth, use transactions to maintain data integrity.
For more detailed information, search “Dapper PostgreSQL best practices” on Google. The Dapper official documentation on GitHub is also very helpful.
Overall, learning Dapper is straightforward. Practice for a few days, and you’ll get the hang of it. I initially struggled a bit but now use it comfortably. You can try it too; I believe you’ll be satisfied 🙂