Crunchy Data (@crunchydata)'s Twitter Profile
Crunchy Data

@crunchydata

Trusted Open Source PostgreSQL and Enterprise PostgreSQL Support, Technology and Training

ID: 1657873424

Link: http://www.crunchydata.com · Joined: 09-08-2013 14:26:12

3.3K Tweets

6.6K Followers

2.2K Following


Our new logical replication from Postgres to Iceberg has been turning heads recently as folks realize how many options there are to connect operational databases with analytics using Postgres + Iceberg. We’ve noticed a couple trends since we released this feature last month so …


Sometimes you just want to watch your Postgres logs in real time. Crunchy Bridge has a new screen for Live Log. It's blank when you show up, but as your database logs activity, it will populate with the logs. This includes out of the box logs for warnings and queries, anything …


Coming in Postgres 18, showing buffers will be part of standard EXPLAIN ANALYZE output. For example: EXPLAIN ANALYZE SELECT * FROM orders; …
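A quick sketch of the change (the `orders` table is a hypothetical example):

```sql
-- Postgres 17 and earlier: buffer counts must be requested explicitly
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders;

-- Postgres 18: EXPLAIN ANALYZE includes buffer counts by default,
-- so plan nodes carry lines like:
--   Buffers: shared hit=42 read=7
EXPLAIN ANALYZE SELECT * FROM orders;
```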


Looking for an archiving solution with long term retention for analytics? <a href="/craigkerstiens/">Craig Kerstiens - Finger lime evangelist</a> has the recipe for success that combines Postgres partitioning with Iceberg replication.

1 - Partition your high throughput data - this is ideal for performance and management anyways.
2 - …
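Step 1 of the recipe might look like this, a minimal sketch with a hypothetical `events` table (the Iceberg replication step is product-specific and truncated above):

```sql
-- Step 1: range-partition high-throughput data by time
CREATE TABLE events (
    id         bigint GENERATED ALWAYS AS IDENTITY,
    created_at timestamptz NOT NULL,
    payload    jsonb
) PARTITION BY RANGE (created_at);

-- One partition per month; old partitions can later be detached
CREATE TABLE events_2025_06 PARTITION OF events
    FOR VALUES FROM ('2025-06-01') TO ('2025-07-01');
```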

Choosing an index for a very large data set? <a href="/pwramsey/">Paul Ramsey</a> digs into how to evaluate BRIN vs B-tree. 

tldr:
The BRIN index can be a useful alternative to the B-tree for specific cases:
- For tables with an "insert only" data pattern and a correlated column
- For use cases with very …
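The two options side by side, on a hypothetical `orders` table:

```sql
-- B-tree: the default, general-purpose index
CREATE INDEX orders_created_btree ON orders (created_at);

-- BRIN: a tiny index that summarizes block ranges; effective when
-- created_at is physically correlated with insert order
CREATE INDEX orders_created_brin ON orders USING brin (created_at);
```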

Running fast queries against large analytics tables is one of the hallmark features of Crunchy Data Warehouse. In this video:
- an Iceberg file is created from a flat file
- data is queried and rolled up by day, taking 38ms
- the same rollup is done with the same data in …
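A daily rollup of the kind shown in the video is just an aggregate over a truncated timestamp; a sketch against a hypothetical `events` table:

```sql
-- Roll event counts up by day
SELECT date_trunc('day', created_at) AS day,
       count(*)                      AS events
FROM   events
GROUP  BY 1
ORDER  BY 1;
```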


Working with money in Postgres 💰? We have a hands-on tutorial to show you why floats can be problematic and some functions for working with numeric data types and money. 

crunchydata.com/developers/pla…
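The core of the problem in two queries — binary floats can't represent most decimal fractions exactly, while numeric can:

```sql
-- float arithmetic accumulates binary rounding error
SELECT 0.1::float8 + 0.2::float8;    -- 0.30000000000000004

-- numeric is exact decimal arithmetic, the right choice for money
SELECT 0.1::numeric + 0.2::numeric;  -- 0.3

-- round() and to_char() help when formatting amounts
SELECT round(1234.5678::numeric, 2); -- 1234.57
```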

Postgres tip of the day: Set a lock timeout. Set a lock_timeout for your application sessions so that any statement waiting longer than that to acquire a lock is aborted. This way, you won't have unexpected locks on tables for long periods of time blocking …
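A minimal sketch (the `app_user` role name is a hypothetical example):

```sql
-- Abort any statement that waits more than 2 seconds for a lock
SET lock_timeout = '2s';

-- Or make it the default for every session of an application role
ALTER ROLE app_user SET lock_timeout = '2s';
```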


PostGIS tip: Get faster spatial queries and maps with ST_Simplify. ⚡ When you don't need exact topology and want to make queries faster and simpler, look at the ST_Simplify and ST_SimplifyPreserveTopology functions. These can be run with any spatial query with a scale factor to …
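A sketch of both calls, assuming a hypothetical `counties` table with a `geom` column; the tolerance is in the geometry's own units:

```sql
-- Plain simplification: fast, but small rings may collapse away
SELECT ST_Simplify(geom, 100) FROM counties;

-- Topology-preserving variant: keeps geometries valid
SELECT ST_SimplifyPreserveTopology(geom, 100) FROM counties;
```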


This week Karen Jex will be at PyCon Italia (<a href="/pyconit/">PyCon Italia</a>) in Bologna talking about Postgres.  She's giving a talk on table partitioning and best practices. 🐘 🐍

Everyone wants nice metrics graphs showing CPU load, disk usage, and I/O ... but let's be real, no one wants to set it up or pay for it. 

We provide Crunchy Bridge metrics out of the box that are battle-tested and reliable. There's no guesswork about whether a special database …

Postgres tip of the day: Covering Indexes. An index can contain, beyond its key columns, additional INCLUDE columns. These INCLUDE columns are not part of the index key, but their data is stored with the index so that some queries can return data exclusively from the …
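A sketch with a hypothetical `users` table:

```sql
-- Key column: email. Payload column: name rides along in the index.
CREATE INDEX users_email_covering ON users (email) INCLUDE (name);

-- This query can be satisfied by an index-only scan,
-- never touching the table heap:
SELECT name FROM users WHERE email = 'a@example.com';
```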


Replication from Postgres --> Iceberg demo here. What you're seeing:
- Running a big query is slow, so we'll put our data in Iceberg instead
- Create some data with pgbench and then sync it to Iceberg with replication
- As replication works, see queries update in real time …


Let’s talk about paginating Postgres data with LIMIT and OFFSET. ▶️ First, why paginate? It is very common in modern applications to paginate through large sets of data. This way, the application only displays some of the data to an end user, and they can decide if they …
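The basic pattern, sketched against a hypothetical `articles` table — a stable ORDER BY is what makes the pages deterministic:

```sql
-- Page 3 with 20 rows per page: skip the first 40 rows, return 20
SELECT id, title
FROM   articles
ORDER  BY id
LIMIT  20 OFFSET 40;
```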


New blog today and we just love the simple approach here from <a href="/brandur/">Brandur</a>. 

"Don't mock the database: Data fixtures are parallel safe, and plenty fast"

Even though something might be a few milliseconds faster, that doesn’t mean it’s better. Test fixtures have great coverage and as …

In Postgres version 15, after many years of anticipation, Postgres introduced the MERGE command. 

MERGE combines multiple data operations (INSERT, UPDATE, DELETE) into one atomic statement. Before MERGE, these operations were done with INSERT ... ON CONFLICT or a SELECT + UPDATE …
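A sketch of the shape of a MERGE statement, with hypothetical `inventory` and `incoming_shipments` tables:

```sql
MERGE INTO inventory AS t
USING incoming_shipments AS s
    ON t.sku = s.sku
WHEN MATCHED AND s.qty = 0 THEN
    DELETE                                 -- drop zeroed-out rows
WHEN MATCHED THEN
    UPDATE SET qty = t.qty + s.qty         -- add to existing stock
WHEN NOT MATCHED THEN
    INSERT (sku, qty) VALUES (s.sku, s.qty);  -- brand-new SKUs
```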

Postgres folks love CTEs. One way that CTEs can be used - that is often underutilized - is using a CTE to do table updates, like moving data from one table to another. Or moving data from one table to several. 

And because your INSERT and DELETE statements were done in a CTE, …
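The move-data-between-tables pattern looks roughly like this, with hypothetical `events` and `events_archive` tables:

```sql
-- Delete from the hot table and insert into the archive
-- in a single atomic statement
WITH moved AS (
    DELETE FROM events
    WHERE  created_at < now() - interval '90 days'
    RETURNING *
)
INSERT INTO events_archive
SELECT * FROM moved;
```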

Postgres term refresh: DDL and DML. As you dig through the Postgres docs or blogs, you're likely to come across these two terms. ◈ DDL: data definition language. This includes anything done to create tables or columns, or to modify the underlying structure of the database. When your …
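The distinction in four statements (the `books` table is a made-up example):

```sql
-- DDL: defines or changes structure
CREATE TABLE books (id bigint PRIMARY KEY, title text);
ALTER TABLE books ADD COLUMN published date;

-- DML: manipulates the data inside that structure
INSERT INTO books (id, title) VALUES (1, 'PostgreSQL Basics');
UPDATE books SET published = '2024-01-01' WHERE id = 1;
```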


If you start a new Postgres project, what is on your must-have extension list for Day 1❓ Our internal chat last week had a good discussion on this, and the consensus for almost every Postgres database was:
* pg_stat_statements
* pgAudit
* pg_cron
* auto_explain
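Enabling them is roughly this; note that pg_stat_statements, pgAudit, pg_cron, and auto_explain all rely on being preloaded at server start:

```sql
-- First, in postgresql.conf (requires a restart):
--   shared_preload_libraries = 'pg_stat_statements,pgaudit,pg_cron,auto_explain'

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
CREATE EXTENSION IF NOT EXISTS pgaudit;
CREATE EXTENSION IF NOT EXISTS pg_cron;
-- auto_explain has no SQL-level objects; it is configured entirely
-- through shared_preload_libraries and auto_explain.* settings
```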


We are excited to announce that Crunchy Data is joining Snowflake to bring Postgres to the AI Data Cloud. 🎉 crunchydata.com/blog/crunchy-d…