Data Engineering
Architecting data planes with Postgres, Redis, Elasticsearch, and Kafka. I believe in "Polyglot Persistence": using the right data store for each specific access pattern.
Polyglot Persistence
Monolithic databases often become a scaling bottleneck. Instead of forcing all data models into a single relational DB, I design systems where data lives in the store best suited for its workload: Relational for transactions, Document/Search for discovery, and Key-Value for speed.
This requires robust synchronization strategies. I rely heavily on Event Sourcing and Change Data Capture (CDC) patterns to propagate state across these distributed stores reliably.
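Propagation usually starts at the write path. Below is a minimal sketch of the transactional outbox variant of this approach, assuming hypothetical `orders` and `outbox` Postgres tables: the domain row and its event commit atomically, and a CDC tool such as Debezium later tails `outbox` and publishes each row to Kafka.

```python
# Transactional outbox sketch: the domain write and the event record
# commit in one Postgres transaction, so downstream stores never see
# an event for a write that rolled back. Table names are illustrative.
import json
import uuid

import psycopg2  # pip install psycopg2-binary

def place_order(conn, customer_id: str, total_cents: int) -> str:
    order_id = str(uuid.uuid4())
    event = {
        "event_type": "OrderPlaced",
        "order_id": order_id,
        "customer_id": customer_id,
        "total_cents": total_cents,
    }
    with conn:  # single transaction: both inserts commit, or neither does
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO orders (id, customer_id, total_cents) "
                "VALUES (%s, %s, %s)",
                (order_id, customer_id, total_cents),
            )
            cur.execute(
                "INSERT INTO outbox (id, aggregate_id, payload) "
                "VALUES (%s, %s, %s)",
                (str(uuid.uuid4()), order_id, json.dumps(event)),
            )
    return order_id
```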
The Data Stack
- PostgreSQL (Relational): The bedrock. ACID compliance for mission-critical relationships and transactions.
- Apache Kafka (Streaming): The central nervous system. A real-time event log for decoupling producers and consumers.
- Elasticsearch (Search): Full-text search, complex filtering, and aggregations that would choke a SQL DB.
- Redis (Cache): Sub-millisecond access for hot data, session management, and leaderboards (see the cache-aside sketch below).
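As one concrete example of matching store to workload, here is a minimal cache-aside sketch, assuming Postgres as the system of record and Redis for hot reads; the key format, table, and TTL are illustrative.

```python
# Cache-aside read: serve from Redis when hot, fall back to Postgres,
# then warm the cache with a short TTL so evictions stay self-healing.
import json

import psycopg2  # pip install psycopg2-binary
import redis     # pip install redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def get_customer(conn, customer_id: str) -> dict | None:
    key = f"customer:{customer_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)  # sub-millisecond hot path
    with conn.cursor() as cur:
        cur.execute(
            "SELECT id, name, email FROM customers WHERE id = %s",
            (customer_id,),
        )
        row = cur.fetchone()
    if row is None:
        return None
    customer = {"id": row[0], "name": row[1], "email": row[2]}
    r.setex(key, 300, json.dumps(customer))  # cache for five minutes
    return customer
```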
Data Patterns
Event Sourcing
Storing the *changes* (events) as the source of truth, allowing us to replay history and rebuild state at any point.
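A minimal illustration of the idea, with illustrative event names: state is never stored directly, only derived by folding over the append-only event log, so replaying a prefix of the log reconstructs state at any point in history.

```python
# Event-sourced aggregate: the event list is the source of truth;
# current state is a pure fold over it.
from dataclasses import dataclass, field

@dataclass
class Account:
    balance_cents: int = 0
    events: list[dict] = field(default_factory=list)

    def apply(self, event: dict) -> None:
        if event["type"] == "Deposited":
            self.balance_cents += event["amount_cents"]
        elif event["type"] == "Withdrawn":
            self.balance_cents -= event["amount_cents"]

    def record(self, event: dict) -> None:
        self.events.append(event)  # append-only: events are never mutated
        self.apply(event)

    @classmethod
    def replay(cls, events: list[dict]) -> "Account":
        account = cls()
        for event in events:  # rebuild state by folding over history
            account.apply(event)
        return account

log = [
    {"type": "Deposited", "amount_cents": 10_000},
    {"type": "Withdrawn", "amount_cents": 2_500},
]
assert Account.replay(log).balance_cents == 7_500
```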
CDC Pipelines
Using tools like Debezium to stream DB changes to Kafka, updating search indexes and caches in near-real-time.
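A minimal consumer-side sketch of that pipeline, assuming Debezium's default JSON envelope (`payload.op`, `payload.before`, `payload.after`) and a hypothetical topic name `pg.public.products`; each change event updates the Elasticsearch index and evicts the matching Redis key.

```python
# CDC consumer: apply Debezium change events from Kafka to the search
# index and invalidate the cache, keeping both near-real-time.
import json

import redis                                    # pip install redis
from elasticsearch import Elasticsearch, NotFoundError  # pip install elasticsearch
from kafka import KafkaConsumer                 # pip install kafka-python

es = Elasticsearch("http://localhost:9200")
cache = redis.Redis()
consumer = KafkaConsumer(
    "pg.public.products",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v) if v else None,
)

for message in consumer:
    if message.value is None:  # tombstone that follows a delete
        continue
    payload = message.value["payload"]
    row_id = (payload["after"] or payload["before"])["id"]
    if payload["op"] in ("c", "u", "r"):  # create / update / snapshot read
        es.index(index="products", id=row_id, document=payload["after"])
    elif payload["op"] == "d":
        try:
            es.delete(index="products", id=row_id)
        except NotFoundError:
            pass  # already gone from the index
    cache.delete(f"product:{row_id}")  # never serve stale cached rows
```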
Data Mesh
Treating data as a product. Domain teams own their data and expose it via standardized contracts.
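As a sketch of what a standardized contract can look like in practice, here is an illustrative versioned record schema that an owning domain team might publish for consumers to validate against; every field name and the version constant are assumptions for the example.

```python
# A data-product contract: a versioned, frozen record shape that the
# shipping domain publishes and downstream consumers validate against.
from dataclasses import dataclass

SHIPMENT_CONTRACT_VERSION = 2

@dataclass(frozen=True)
class ShipmentEvent:
    """Public contract for the shipping domain's outbound events."""
    schema_version: int
    shipment_id: str
    order_id: str
    status: str       # one of: "created", "in_transit", "delivered"
    occurred_at: str  # ISO-8601 UTC timestamp

def validate(event: ShipmentEvent) -> None:
    # Consumers reject records that drift from the published contract.
    if event.schema_version != SHIPMENT_CONTRACT_VERSION:
        raise ValueError(f"unsupported schema version {event.schema_version}")
    if event.status not in {"created", "in_transit", "delivered"}:
        raise ValueError(f"unknown status {event.status!r}")
```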