Introduction
Bolt is a protocol used by applications to connect to Neo4j. Monitoring Bolt connections helps you identify connection pooling issues and optimize application connectivity.
In this lesson, you’ll learn how to monitor Bolt connections to ensure your applications are connecting to Neo4j efficiently.
Understanding Bolt Connections
Bolt is a stateful protocol that runs on port 7687. Bolt connections represent client applications connected to your database. Each connection can be actively executing queries or sitting idle in a connection pool waiting to be used.
Bolt connections are stateful, meaning they maintain context between messages. When a connection starts a transaction, it holds that connection until the transaction completes. This is why properly closing transactions is important for freeing up connection resources.
Applications use connection pools to reuse connections for multiple queries. This reduces the overhead of establishing new connections for every query. A well-configured connection pool maintains idle connections ready for immediate use.
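For example, with the Neo4j Python driver a single long-lived driver object owns the connection pool, and each session borrows a pooled connection only for as long as it is needed. A minimal sketch (the URI and credentials are placeholders):

```python
from neo4j import GraphDatabase

# One driver per application: it owns the Bolt connection pool.
driver = GraphDatabase.driver(
    "neo4j+s://<your-instance>.databases.neo4j.io",  # placeholder Aura URI
    auth=("neo4j", "<password>"),                    # placeholder credentials
)

# Each session borrows a connection from the pool and returns it when the
# session closes, so repeated queries reuse connections instead of opening
# new ones for every query.
for i in range(100):
    with driver.session() as session:
        session.run("RETURN $i AS i", i=i).consume()

driver.close()  # close the pool when the application shuts down
```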
Monitoring Bolt Connection Metrics
Bolt connection metrics are available in the Metrics dashboard on the Instance tab.
Aura provides several metrics to help you understand how applications connect to your database.
Running Connections
The running connections metric shows the number of connections currently executing queries. This value changes constantly as queries start and finish. It should scale with your workload, increasing during high activity and decreasing during low activity.
Idle Connections
The idle connections metric shows the number of connections sitting in connection pools waiting to be used. A stable number of idle connections indicates healthy connection pooling. These connections are ready for immediate use when a query needs to execute.
Idle connections do not consume significant server resources - they simply wait for messages from the client. The connection becomes active when it starts processing a query or transaction.
Opened and Closed Connections
Opened and closed metrics track the number and rate of connections being created and terminated. The opened count includes both successful and failed connection attempts. The closed count includes both properly closed connections and abnormal terminations.
A healthy application maintains balanced open and close rates over time. High connection churn with many connections constantly opening and closing indicates the application may not be using connection pooling properly.
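The code pattern behind this churn is typically a new driver (and therefore a new Bolt connection) created for every query. A hedged illustration of the anti-pattern next to the pooled approach, using the Python driver (the URI, credentials, and the Person query are placeholders):

```python
from neo4j import GraphDatabase

URI = "neo4j+s://<your-instance>.databases.neo4j.io"  # placeholder
AUTH = ("neo4j", "<password>")                         # placeholder

# Anti-pattern: a new driver, and a new connection, for every query.
# In the metrics this shows up as high opened/closed rates with few idle connections.
def count_people_churny(name):
    driver = GraphDatabase.driver(URI, auth=AUTH)
    try:
        with driver.session() as session:
            return session.run("MATCH (p:Person {name: $name}) RETURN count(p) AS c",
                               name=name).single()["c"]
    finally:
        driver.close()

# Pooled pattern: one long-lived driver; sessions reuse pooled connections.
driver = GraphDatabase.driver(URI, auth=AUTH)

def count_people_pooled(name):
    with driver.session() as session:
        return session.run("MATCH (p:Person {name: $name}) RETURN count(p) AS c",
                           name=name).single()["c"]
```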
If you see a sudden spike in failed connection attempts (opened connections that fail immediately), this could indicate:
- The connection limit has been reached
- Network connectivity issues between the application and database
- Authentication problems with application credentials
Managing Connection Health
Healthy connection patterns show stable idle connections in the pool, running connections that scale with workload, and balanced open and close rates. This indicates applications are reusing connections efficiently rather than creating new connections for every query.
High connection churn occurs when connections constantly open and close with few idle connections maintained. This indicates connection pooling may not be configured properly. Connection churn increases overhead and reduces performance because establishing new connections is expensive.
If you see many more connections opened than closed over time, this may indicate a connection leak. Applications should close connections properly when finished with them. Connection leaks can lead to pool exhaustion where no connections are available for new queries.
Work with your development team to ensure applications use proper connection handling patterns for their programming language (such as try-finally blocks in Java or context managers in Python).
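In Python, for example, context managers guarantee that the session (and any transaction it holds) is closed even when a query fails, so the underlying connection is always returned to the pool. A sketch assuming the official Neo4j Python driver; the graph pattern is purely illustrative:

```python
# The session is a context manager: it is closed, and its connection returned
# to the pool, even if the work function raises an exception.
def follow(driver, from_name, to_name):
    def work(tx):
        tx.run(
            "MATCH (a:Person {name: $a}), (b:Person {name: $b}) "
            "MERGE (a)-[:FOLLOWS]->(b)",
            a=from_name, b=to_name,
        )

    with driver.session() as session:
        # execute_write commits or rolls back for you, so the connection is
        # never left holding an open transaction.
        session.execute_write(work)
```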
Each Aura instance has connection limits based on instance tier and size. You can view your instance’s connection limit in the Aura console under the instance Details page.
When the connection limit is reached, new connection attempts are rejected with an error message. This can cause application failures or degraded performance as users are unable to connect to the database.
Monitor your connection metrics during peak hours to ensure you stay well below these limits. A good practice is to keep maximum usage at 80% or less of your instance’s connection limit to allow for traffic spikes.
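As a simple way to apply this guideline, compare the peak connection count you observe in the Metrics dashboard against your instance's connection limit. A tiny sketch (both numbers are placeholders you would read from the dashboard and the instance Details page):

```python
def connection_headroom_ok(peak_connections, connection_limit, threshold=0.8):
    """Return True if peak usage stays at or below the given fraction of the limit."""
    return peak_connections <= threshold * connection_limit

# A peak of 350 connections against a limit of 400 exceeds the 80% guideline (320),
# so the pool configuration or instance size should be reviewed.
print(connection_headroom_ok(350, 400))  # False
```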
Scaling Your Instance
If your applications frequently hit connection limits, first review your connection pool configuration with your development team.
Key connection pool settings to review include:
- Maximum pool size - The maximum number of connections the application can open. Should be less than your instance’s connection limit.
- Minimum pool size - The number of connections to keep ready. Should match typical concurrent query load.
- Connection timeout - How long to wait for an available connection before failing.
- Idle connection lifetime - How long to keep unused connections before closing them.
If connection pools are optimized and limits are still reached, scale your instance to support more concurrent connections.
Configuring Connection Pools
Refer to your Neo4j driver documentation for connection pool configuration options specific to your programming language.
Check Your Understanding
Connection Pool Purpose
What is the primary purpose of using connection pools with Neo4j?
- ❏ To increase the maximum number of concurrent queries
- ✓ To reuse connections and reduce connection establishment overhead
- ❏ To automatically close connections after each query
- ❏ To prevent connection leaks in application code
Hint
Think about why establishing a new connection for every query would be inefficient.
Solution
To reuse connections and reduce connection establishment overhead is correct.
Connection pools maintain idle connections that can be reused for multiple queries. Establishing a new connection for every query is expensive because it requires authentication, network setup, and resource allocation. Reusing connections from a pool eliminates this overhead.
Increasing concurrent queries is not the primary purpose; that depends on instance limits. Automatically closing connections is incorrect; pools keep connections open for reuse. Preventing connection leaks is an application responsibility, not a pool feature.
Connection Churn Indicator
What indicates high connection churn that suggests connection pooling is not configured properly?
- ❏ High number of idle connections with low open/close rates
- ❏ Running connections that increase during high activity
- ✓ High open and close rates with few idle connections
- ❏ Balanced open and close rates over time
Hint
Think about what happens when connections are created for every query instead of being reused.
Solution
High open and close rates with few idle connections is correct.
High connection churn occurs when applications constantly create and close connections instead of reusing them from a pool. This creates high open and close rates with few connections sitting idle in a pool. Connection churn is expensive because establishing new connections requires authentication, network setup, and resource allocation.
A high number of idle connections with low open/close rates indicates good pooling. Running connections that increase during high activity reflect normal workload scaling. Balanced open and close rates are healthy when combined with stable idle connections.
Summary
In this lesson, you learned how to monitor Bolt connection metrics to ensure your applications connect to Neo4j efficiently:
- Running connections - Connections currently executing queries that should scale with workload
- Idle connections - Connections waiting in pools, ready for immediate use
- Opened and closed metrics - Track connection creation and termination rates
Healthy connection patterns show stable idle connections, running connections that scale with workload, and balanced open and close rates. High connection churn with many connections constantly opening and closing indicates connection pooling is not configured properly.
In the next lesson, you will learn about garbage collection metrics and their impact on performance.