How does store-and-forward work in a secondary (remote) gateway?

I want to use my primary postgres database as the source of truth and the repository for data coming from a secondary gateway. I set up remote tag providers, which I can access, but I can't seem to access their store-and-forward entries from the Tag Provider interface. I had to set up a full remote connection to the postgres database before I could send anything that way, and it mostly seems to be working, but I am not clear on how the caching mechanisms work. Do I need to deploy a local postgres instance for this? I thought it would cache in a local sqlite database and send when possible, but that isn't clear. I have read the documentation, and honestly I don't feel I understand how to use it any better than before I read it. Is there a more detailed guide somewhere?

I would also like to have access to recent historical data (for monitor plots) without doing remote calls to a database that will be outside the physical building the new gateway will be living in. Is this a built-in option, or something I need to build myself?

Store and forward sits between a history source (typically a tag with history enabled) and the configured history sink: either a database, for a normal historian, or a remote historian / sync service.

Remote tag providers, looking back at the origin server, have the option to divert history queries to a historian on the receiving server. Or you can construct fully qualified tag history paths that do the same.
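As a sketch of that second option: Ignition's tag history bindings and scripting accept fully qualified history paths of the form `[HistoryProvider/Gateway:TagProvider]path/to/tag`, which pin the query to a specific history provider and origin gateway. The helper below just builds that string; all of the names in the example (`histdb`, `secondary-gw`, `default`, the tag path) are placeholders, not anything from your setup.

```python
# Build a fully qualified tag history path in Ignition's
# "[HistoryProvider/Gateway:Provider]tag/path" form.
# All concrete names below are placeholders.
def qualified_history_path(history_provider, gateway, tag_provider, tag_path):
    """Format a history path that pins the query to a specific
    history provider and origin gateway."""
    return "[%s/%s:%s]%s" % (history_provider, gateway, tag_provider, tag_path)

path = qualified_history_path("histdb", "secondary-gw", "default",
                              "Line1/Motor/Amps")
# → "[histdb/secondary-gw:default]Line1/Motor/Amps"

# In gateway scope, you would hand such paths to the historian query, e.g.:
# system.tag.queryTagHistory(paths=[path], rangeHours=8)
```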

The S&F caches are not between gateways, but only within a source gateway.
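To make that concrete, here is a toy sketch of the store-and-forward pattern itself: records land in a local cache first, and a forward pass flushes them to the sink whenever it is reachable. This is an illustration of the concept only, not Ignition's implementation (Ignition's memory buffer and local cache live on the source gateway, in front of its configured sink).

```python
import sqlite3

class StoreAndForward:
    """Toy store-and-forward queue: buffer records locally, flush to the
    sink when it is reachable. Conceptual sketch only, not Ignition's
    actual mechanism."""

    def __init__(self, sink):
        self.sink = sink  # callable(record); raises IOError when the sink is down
        self.cache = sqlite3.connect(":memory:")  # stand-in for a local disk cache
        self.cache.execute(
            "CREATE TABLE pending (id INTEGER PRIMARY KEY, data TEXT)")

    def store(self, data):
        """Record always goes to the local cache first."""
        self.cache.execute("INSERT INTO pending (data) VALUES (?)", (data,))

    def forward(self):
        """Flush cached records in order; stop at the first failure so
        nothing is lost or reordered."""
        rows = self.cache.execute(
            "SELECT id, data FROM pending ORDER BY id").fetchall()
        for row_id, data in rows:
            try:
                self.sink(data)
            except IOError:
                break  # sink unreachable; keep this and later rows cached
            self.cache.execute("DELETE FROM pending WHERE id = ?", (row_id,))

    def pending(self):
        return self.cache.execute(
            "SELECT COUNT(*) FROM pending").fetchone()[0]
```

The point of the sketch: the cache sits between the source and *its* sink, which is why configuring caching on the receiving gateway doesn't buffer data for a remote source.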

Thanks. So it's effectively unrelated to any attempt I make at local caching of data? For instance, I've got the same providers configured "locally" on each gateway, but the secondary gateway's providers are mounted as remote providers on the primary gateway, like [{gateway}-{local provider}]. I am currently using a tag provider on the secondary to write to the same database as the primary, but maybe I should be using the remote tags to write locally on the primary, and have a new database on the secondary gateway machine that caches locally?

This is the crux. You can use a history splitter and simultaneously log history to two databases.
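Conceptually the splitter is just a fan-out: every history record written through it lands in both underlying connections. A toy sketch, with `local_db`/`remote_db` and the `history_data` table standing in for the real connections and Ignition's actual historian schema:

```python
import sqlite3

def make_splitter(local_db, remote_db):
    """Return a write function that logs each history row to both
    databases. Toy illustration of a history splitter; table and
    column names are placeholders, not Ignition's real schema."""
    for db in (local_db, remote_db):
        db.execute("CREATE TABLE IF NOT EXISTS history_data "
                   "(tagpath TEXT, val REAL, ts INTEGER)")
    def write(tagpath, val, ts):
        for db in (local_db, remote_db):
            db.execute("INSERT INTO history_data VALUES (?, ?, ?)",
                       (tagpath, val, ts))
    return write
```

The trade-off versus replication: the splitter doubles write traffic from the source gateway and each copy fails independently, whereas database-level replication keeps one authoritative write path.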

IMNSHO, the right answer is to replicate the origin database to the other site (outside of Ignition) and point that site's history queries at the replica.
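For PostgreSQL specifically, built-in logical replication (PostgreSQL 10+) covers this. A minimal config sketch, with every name (`history_pub`, `history_sub`, `primary-db`, `ignition_history`) a placeholder:

```sql
-- On the origin (primary) server, after setting wal_level = logical
-- in postgresql.conf:
CREATE PUBLICATION history_pub FOR ALL TABLES;

-- On the replica at the remote site:
CREATE SUBSCRIPTION history_sub
    CONNECTION 'host=primary-db dbname=ignition_history user=repl password=...'
    PUBLICATION history_pub;
```

The subscribed tables on the replica should be treated as read-only; point that site's Ignition history queries at the replica while all writes continue to go to the origin.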

Ok -- we're going to set up a secondary postgres database on the secondary gateway machine and then replicate it into our primary postgres server. Then I can mount the secondary database on my primary gateway so that I can view its data there. In the meantime, I will point all historian activity on the secondary gateway at the secondary database, so everything there is locally accessible.

phew