/*-------------------------------------------------------------------------
 *
 * syscache.h
 *    System catalog cache definitions.
 *
 * See also lsyscache.h, which provides convenience routines for
 * common cache-lookup operations.
 *
 * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 * src/include/utils/syscache.h
 *
 *-------------------------------------------------------------------------
 */
#ifndef SYSCACHE_H
#define SYSCACHE_H

#include "access/attnum.h"
#include "access/htup.h"
/* we intentionally do not include utils/catcache.h here */

#include "catalog/syscache_ids.h"    /* IWYU pragma: export */

extern void InitCatalogCache(void);
extern void InitCatalogCachePhase2(void);

extern HeapTuple SearchSysCache(int cacheId,
                                Datum key1, Datum key2, Datum key3, Datum key4);

/*
 * The use of argument-specific numbers is encouraged.  They're faster, and
 * they insulate the caller from changes in the maximum number of keys.
 */
extern HeapTuple SearchSysCache1(int cacheId,
                                 Datum key1);
extern HeapTuple SearchSysCache2(int cacheId,
                                 Datum key1, Datum key2);
extern HeapTuple SearchSysCache3(int cacheId,
                                 Datum key1, Datum key2, Datum key3);
extern HeapTuple SearchSysCache4(int cacheId,
                                 Datum key1, Datum key2, Datum key3, Datum key4);
extern void ReleaseSysCache(HeapTuple tuple);
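
/*
 * Illustrative usage sketch (not part of this header's API surface): a
 * single-key lookup pairs SearchSysCache1 with ReleaseSysCache once the
 * caller is done with the tuple.  "typid" and "typform" are hypothetical.
 *
 *	HeapTuple	tup = SearchSysCache1(TYPEOID, ObjectIdGetDatum(typid));
 *
 *	if (!HeapTupleIsValid(tup))
 *		elog(ERROR, "cache lookup failed for type %u", typid);
 *	typform = (Form_pg_type) GETSTRUCT(tup);
 *	... use typform, then ...
 *	ReleaseSysCache(tup);
 */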

extern HeapTuple SearchSysCacheLocked1(int cacheId,
                                       Datum key1);
/* convenience routines */
extern HeapTuple SearchSysCacheCopy(int cacheId,
                                    Datum key1, Datum key2, Datum key3, Datum key4);
extern HeapTuple SearchSysCacheLockedCopy1(int cacheId,
                                           Datum key1);
extern bool SearchSysCacheExists(int cacheId,
                                 Datum key1, Datum key2, Datum key3, Datum key4);
extern Oid GetSysCacheOid(int cacheId, AttrNumber oidcol,
                          Datum key1, Datum key2, Datum key3, Datum key4);

extern HeapTuple SearchSysCacheAttName(Oid relid, const char *attname);
extern HeapTuple SearchSysCacheCopyAttName(Oid relid, const char *attname);
extern bool SearchSysCacheExistsAttName(Oid relid, const char *attname);

extern HeapTuple SearchSysCacheAttNum(Oid relid, int16 attnum);
extern HeapTuple SearchSysCacheCopyAttNum(Oid relid, int16 attnum);

extern Datum SysCacheGetAttr(int cacheId, HeapTuple tup,
                             AttrNumber attributeNumber, bool *isNull);

extern Datum SysCacheGetAttrNotNull(int cacheId, HeapTuple tup,
                                    AttrNumber attributeNumber);

extern uint32 GetSysCacheHashValue(int cacheId,
                                   Datum key1, Datum key2, Datum key3, Datum key4);
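
/*
 * Illustrative usage sketch (not part of this header's API surface):
 * SysCacheGetAttr is how callers read attributes that may be NULL or
 * stored out of line, since such columns cannot be read through the
 * GETSTRUCT view of the tuple.  "tup" is assumed to be a valid PROCOID
 * cache entry; "prosrc" is hypothetical.
 *
 *	bool		isnull;
 *	Datum		datum = SysCacheGetAttr(PROCOID, tup,
 *										Anum_pg_proc_prosrc, &isnull);
 *
 *	if (!isnull)
 *		prosrc = TextDatumGetCString(datum);
 */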

/* list-search interface. Users of this must import catcache.h too */
struct catclist;
extern struct catclist *SearchSysCacheList(int cacheId, int nkeys,
                                           Datum key1, Datum key2, Datum key3);

extern void SysCacheInvalidate(int cacheId, uint32 hashValue);

extern bool RelationInvalidatesSnapshotsOnly(Oid relid);
extern bool RelationHasSysCache(Oid relid);
extern bool RelationSupportsSysCache(Oid relid);

/*
 * The use of the macros below rather than direct calls to the corresponding
 * functions is encouraged, as it insulates the caller from changes in the
 * maximum number of keys.
 */
#define SearchSysCacheCopy1(cacheId, key1) \
    SearchSysCacheCopy(cacheId, key1, 0, 0, 0)
#define SearchSysCacheCopy2(cacheId, key1, key2) \
    SearchSysCacheCopy(cacheId, key1, key2, 0, 0)
#define SearchSysCacheCopy3(cacheId, key1, key2, key3) \
    SearchSysCacheCopy(cacheId, key1, key2, key3, 0)
#define SearchSysCacheCopy4(cacheId, key1, key2, key3, key4) \
    SearchSysCacheCopy(cacheId, key1, key2, key3, key4)

#define SearchSysCacheExists1(cacheId, key1) \
    SearchSysCacheExists(cacheId, key1, 0, 0, 0)
#define SearchSysCacheExists2(cacheId, key1, key2) \
    SearchSysCacheExists(cacheId, key1, key2, 0, 0)
#define SearchSysCacheExists3(cacheId, key1, key2, key3) \
    SearchSysCacheExists(cacheId, key1, key2, key3, 0)
#define SearchSysCacheExists4(cacheId, key1, key2, key3, key4) \
    SearchSysCacheExists(cacheId, key1, key2, key3, key4)
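
/*
 * Illustrative usage sketch (not part of this header's API surface): an
 * existence probe needs no Release call.  "relid" is hypothetical.
 *
 *	if (!SearchSysCacheExists1(RELOID, ObjectIdGetDatum(relid)))
 *		elog(ERROR, "cache lookup failed for relation %u", relid);
 */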

#define GetSysCacheOid1(cacheId, oidcol, key1) \
    GetSysCacheOid(cacheId, oidcol, key1, 0, 0, 0)
#define GetSysCacheOid2(cacheId, oidcol, key1, key2) \
    GetSysCacheOid(cacheId, oidcol, key1, key2, 0, 0)
#define GetSysCacheOid3(cacheId, oidcol, key1, key2, key3) \
    GetSysCacheOid(cacheId, oidcol, key1, key2, key3, 0)
#define GetSysCacheOid4(cacheId, oidcol, key1, key2, key3, key4) \
    GetSysCacheOid(cacheId, oidcol, key1, key2, key3, key4)
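
/*
 * Illustrative usage sketch (not part of this header's API surface):
 * GetSysCacheOidN returns the requested OID column of the matching tuple,
 * or InvalidOid if there is no match.  "nspname" is hypothetical.
 *
 *	Oid			nspoid = GetSysCacheOid1(NAMESPACENAME,
 *										 Anum_pg_namespace_oid,
 *										 CStringGetDatum(nspname));
 */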

#define GetSysCacheHashValue1(cacheId, key1) \
    GetSysCacheHashValue(cacheId, key1, 0, 0, 0)
#define GetSysCacheHashValue2(cacheId, key1, key2) \
    GetSysCacheHashValue(cacheId, key1, key2, 0, 0)
#define GetSysCacheHashValue3(cacheId, key1, key2, key3) \
    GetSysCacheHashValue(cacheId, key1, key2, key3, 0)
#define GetSysCacheHashValue4(cacheId, key1, key2, key3, key4) \
    GetSysCacheHashValue(cacheId, key1, key2, key3, key4)
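
/*
 * Illustrative usage sketch (not part of this header's API surface):
 * modules that maintain their own caches of catalog data often store
 * GetSysCacheHashValueN() of each entry's key and compare it with the
 * hashvalue passed to a syscache invalidation callback (registered via
 * CacheRegisterSyscacheCallback in utils/inval.h), flushing all entries
 * when that hashvalue is 0.  "typid" is hypothetical.
 *
 *	uint32		hash = GetSysCacheHashValue1(TYPEOID,
 *											 ObjectIdGetDatum(typid));
 */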

#define SearchSysCacheList1(cacheId, key1) \
    SearchSysCacheList(cacheId, 1, key1, 0, 0)
#define SearchSysCacheList2(cacheId, key1, key2) \
    SearchSysCacheList(cacheId, 2, key1, key2, 0)
#define SearchSysCacheList3(cacheId, key1, key2, key3) \
    SearchSysCacheList(cacheId, 3, key1, key2, key3)

#define ReleaseSysCacheList(x)	ReleaseCatCacheList(x)
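
/*
 * Illustrative usage sketch (not part of this header's API surface): a list
 * search returns every entry matching a leading-key prefix; callers must
 * include utils/catcache.h for the CatCList fields used below.  "funcname"
 * is hypothetical.
 *
 *	CatCList   *catlist = SearchSysCacheList1(PROCNAMEARGSNSP,
 *											  CStringGetDatum(funcname));
 *
 *	for (int i = 0; i < catlist->n_members; i++)
 *	{
 *		HeapTuple	tup = &catlist->members[i]->tuple;
 *		Form_pg_proc procform = (Form_pg_proc) GETSTRUCT(tup);
 *
 *		... inspect procform ...
 *	}
 *	ReleaseSysCacheList(catlist);
 */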
#endif /* SYSCACHE_H */