Challenge: Managing your catalog
Throughout this module, you’ve created many graph projections for learning purposes. Now it’s time to practice good memory management by cleaning up your catalog.
Before you begin, let’s recap what you’ve learned about graph catalog operations:
Quick Recap
You’ve learned three essential operations for managing graphs in the catalog.
Listing graphs shows you all projections currently in memory:
CALL gds.graph.list() // (1)
YIELD graphName, nodeCount, relationshipCount, memoryUsage // (2)
RETURN graphName, nodeCount, relationshipCount, memoryUsage // (3)

1. Call the graph list procedure
2. Yield graph statistics including memory usage
3. Return the graph information
Checking existence verifies whether a specific graph is available:
CALL gds.graph.exists('graph-name') // (1)
YIELD graphName, exists // (2)
RETURN graphName, exists // (3)

1. Call the exists procedure with a graph name
2. Yield the graph name and existence status
3. Return the result
Dropping graphs removes them from memory and frees up resources:
CALL gds.graph.drop('graph-name') // (1)
YIELD graphName, memoryUsage // (2)
RETURN graphName, memoryUsage // (3)

1. Call the drop procedure with a graph name
2. Yield the graph name and memory freed
3. Return the dropped graph information
Now let’s put these operations to work.
Your Challenge
Complete the following tasks to clean up your graph catalog:
Task 1: List all graphs currently in your catalog and note how much memory they’re using.
Task 2: Drop one graph from your catalog to practice the basic operation.
Task 3: Now drop all remaining graphs at once using a single query. You’ll need to combine list and drop operations.
Task 4: Verify that your catalog is empty by listing all graphs again.
Hints
For Task 1, you’ll want to see the memoryUsage field to understand how much space each graph is consuming.
For Task 2, choose any graph and drop it using gds.graph.drop().
For Task 3, here’s a template to help you drop all remaining graphs at once. Think about how you can use the output from one operation as input to another:
CALL gds.graph.?????() // (1)
YIELD ????? // (2)
CALL gds.graph.?????(?????) // (3)
YIELD graphName AS droppedGraph // (4)
RETURN droppedGraph // (5)

1. Call the graph list procedure (fill in the procedure name)
2. Yield the graph name (fill in the field name)
3. Call the drop procedure with the graph name (fill in the procedure and parameter)
4. Yield the dropped graph name
5. Return the list of dropped graphs
For Task 4, when you list graphs after dropping them all, you should see an empty result.
Solution
Task 1: List all graphs
CALL gds.graph.list() // (1)
YIELD graphName, nodeCount, relationshipCount, memoryUsage // (2)
RETURN graphName, nodeCount, relationshipCount, memoryUsage // (3)
ORDER BY graphName ASC // (4)

1. Call the graph list procedure
2. Yield graph statistics including memory usage
3. Return the graph information
4. Sort by graph name alphabetically
This shows you all graphs in your catalog with their memory usage.
Task 2: Drop one graph
CALL gds.graph.drop('actor-collaboration') // (1)
YIELD graphName, memoryUsage // (2)
RETURN graphName, memoryUsage // (3)

1. Call the drop procedure to remove 'actor-collaboration'
2. Yield the graph name and memory freed
3. Return the dropped graph information
Choose any graph name from your catalog. The procedure returns information about the dropped graph.
Task 3: Drop all remaining graphs at once
CALL gds.graph.list() // (1)
YIELD graphName // (2)
CALL gds.graph.drop(graphName) // (3)
YIELD graphName AS droppedGraph // (4)
RETURN droppedGraph // (5)

1. Call the graph list procedure
2. Yield each graph name
3. Drop each graph by piping the name to the drop procedure
4. Yield the dropped graph name
5. Return the list of all dropped graphs
This query lists all graphs and drops each one within a single query. The key insight is that the graphName output from list() becomes the input to drop().
Task 4: Verify catalog is empty
CALL gds.graph.list() // (1)
YIELD graphName // (2)
RETURN graphName // (3)

1. Call the graph list procedure
2. Yield graph names
3. Return graph names (should be empty)
This should return no results, confirming all graphs have been removed.
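If you would rather get a single number back than an empty result, a minimal variation (a sketch, not required by the tasks) is to aggregate the listed names; a count of 0 confirms the catalog is empty:

CALL gds.graph.list()
YIELD graphName
RETURN count(graphName) AS remainingGraphs // 0 once every graph has been dropped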
Key takeaway: Chaining operations like this is a powerful pattern in Cypher. You can use the output of one procedure as input to another, enabling complex workflows in a single query.
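To see the same pattern with a different pair of procedures, here is a sketch of a "safe drop" that chains gds.graph.exists() into gds.graph.drop(), so the drop only runs when the graph is actually in the catalog. The name 'actor-collaboration' is just the example graph from Task 2; substitute any name from your own catalog, and note that if it has already been dropped the query simply returns no rows.

CALL gds.graph.exists('actor-collaboration') // check the catalog first
YIELD graphName, exists AS graphExists
WHERE graphExists // keep the row only if the graph is present
CALL gds.graph.drop(graphName) // drop it, reusing the yielded name
YIELD graphName AS droppedGraph
RETURN droppedGraph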
Check your understanding
Chaining graph operations
In Task 3, you dropped all graphs at once by chaining gds.graph.list() and gds.graph.drop() in a single query.
Why does this pattern work?
- ❏ GDS has a special "drop all" function that runs automatically
- ✓ The graphName output from list() becomes the input to drop() for each graph
- ❏ Cypher automatically detects when you want to drop multiple graphs
- ❏ The YIELD clause tells GDS to process all graphs in batch mode
Hint
Look at the query structure: what does YIELD graphName from list() provide to the next CALL statement?
Solution
The graphName output from list() becomes the input to drop() for each graph.
This is a fundamental Cypher pattern: using the output of one procedure as input to another.
Here’s how it works:
- gds.graph.list() returns a row for each graph with its graphName
- For each row, the graphName value is passed to gds.graph.drop()
- Each graph gets dropped individually, but within a single query
This pattern is powerful because:
- You don’t need to know graph names in advance
- You can filter or transform data between operations
- You can build complex workflows in a single query
You’ll use this chaining pattern frequently when working with GDS: listing graphs to check memory, filtering by properties, then operating on the filtered results.
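As an illustration, a sketch of that workflow (using nodeCount as a simple stand-in for memory footprint, with an arbitrary threshold) might filter the listed graphs before dropping them, so only the large projections are removed:

CALL gds.graph.list() // list everything in the catalog
YIELD graphName, nodeCount
WITH graphName, nodeCount
WHERE nodeCount > 1000000 // illustrative threshold: keep only large projections
CALL gds.graph.drop(graphName) // drop each graph that passed the filter
YIELD graphName AS droppedGraph
RETURN droppedGraph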
Summary
You’ve successfully cleaned up your graph catalog by listing all graphs, dropping them to free memory, and verifying they’re gone. These operations are essential for efficient memory management in production workflows.
You’ve now completed all the fundamental concepts: projections, graph types, and catalog management. In the next module, you’ll dive deep into GDS algorithms—learning how to read documentation, configure settings, and model projections for specific analytical questions.