plpgsql support to come later. Along the way, convert execMain's
SELECT INTO support into a DestReceiver, in order to eliminate some ugly
special cases.
Jonah Harris and Tom Lane
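For orientation, a DestReceiver is just a small bundle of callbacks that the executor invokes for each result tuple. The sketch below follows the shape of the struct in recent PostgreSQL sources; the exact field names and signatures (receiveSlot returning bool, for instance) are assumptions relative to the code as of this commit.

    /* Hedged sketch of the DestReceiver callback bundle; shape follows recent
     * PostgreSQL sources, not necessarily the exact definition at this commit. */
    typedef struct _DestReceiver DestReceiver;

    struct _DestReceiver
    {
        /* called once per result tuple delivered by the executor */
        bool        (*receiveSlot) (TupleTableSlot *slot, DestReceiver *self);
        /* called before the first tuple, with the result row description */
        void        (*rStartup) (DestReceiver *self, int operation, TupleDesc typeinfo);
        /* called after the last tuple */
        void        (*rShutdown) (DestReceiver *self);
        /* release the receiver object itself */
        void        (*rDestroy) (DestReceiver *self);
        CommandDest mydest;     /* which kind of destination this is */
    };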
of tuples when passing data up through multiple plan nodes. A slot can now
hold either a normal "physical" HeapTuple, or a "virtual" tuple consisting
of Datum/isnull arrays. Upper plan levels can usually just copy the Datum
arrays, avoiding heap_formtuple() and possible subsequent nocachegetattr()
calls to extract the data again. This work extends Atsushi Ogawa's earlier
patch, which provided the key idea of adding Datum arrays to TupleTableSlots.
(I believe however that something like this was foreseen way back in Berkeley
days --- see the old comment on ExecProject.) A test case involving many
levels of join of fairly wide tables (about 80 columns altogether) showed
about 3x overall speedup, though simple queries will probably not be
helped very much.
I have also duplicated some code in heaptuple.c in order to provide versions
of heap_formtuple and friends that use "bool" arrays to indicate null
attributes, instead of the old convention of "char" arrays containing either
'n' or ' '. This provides a better match to the convention used by
ExecEvalExpr. While I have not made a concerted effort to get rid of uses
of the old routines, I think they should be deprecated and eventually removed.
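To make the convention change concrete, here is a hedged sketch contrasting the two calls. The bool-array name shown, heap_form_tuple, follows later PostgreSQL sources, and 'tupdesc' is assumed to describe an int4 column followed by a nullable second column.

    /* Old convention: per-attribute null flags are chars, 'n' or ' ' */
    Datum       values[2];
    char        cnulls[2];

    values[0] = Int32GetDatum(42);   cnulls[0] = ' ';   /* not null */
    values[1] = (Datum) 0;           cnulls[1] = 'n';   /* null */
    HeapTuple   old_style = heap_formtuple(tupdesc, values, cnulls);

    /* New convention: plain bool flags, matching what ExecEvalExpr produces */
    bool        isnull[2] = {false, true};
    HeapTuple   new_style = heap_form_tuple(tupdesc, values, isnull);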
handle multiple 'formats' for data I/O. Restructure CommandDest and
DestReceiver stuff one more time (it's finally starting to look a bit
clean though). Code now matches latest 3.0 protocol document as far
as message formats go --- but there is no support for binary I/O yet.
DestReceiver pointers instead of just CommandDest values. The DestReceiver
is made at the point where the destination is selected, rather than
deep inside the executor. This cleans up the original kluge implementation
of tstoreReceiver.c, and makes it easy to support retrieving results
from utility statements inside portals. Thus, you can now do fun things
like Bind and Execute a FETCH or EXPLAIN command, and it'll all work
as expected (e.g., you can Describe the portal, or use Execute's count
parameter to suspend the output partway through). Implementation involves
stuffing the utility command's output into a Tuplestore, which would be
kind of annoying for huge output sets, but should be quite acceptable
for typical uses of utility commands.
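As a hedged client-side illustration of the new behavior: routing a FETCH through libpq's extended-protocol entry point (PQexecParams, which issues Parse/Bind/Execute) returns rows just as a SELECT would. The open connection and the earlier DECLARE of cursor "c" are assumptions of the sketch.

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Assumes 'conn' is an open connection on which cursor "c" was declared. */
    static void
    fetch_via_extended_protocol(PGconn *conn)
    {
        /* PQexecParams takes the Parse/Bind/Execute path, so the FETCH output
         * is delivered through a portal just as a SELECT's would be. */
        PGresult   *res = PQexecParams(conn, "FETCH 5 FROM c",
                                       0, NULL, NULL, NULL, NULL,
                                       0 /* text-format results */);

        if (PQresultStatus(res) == PGRES_TUPLES_OK)
            printf("FETCH returned %d rows\n", PQntuples(res));
        else
            fprintf(stderr, "FETCH failed: %s", PQerrorMessage(conn));
        PQclear(res);
    }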
the column by table OID and column number, if it's a simple column
reference. Along the way, get rid of reskey/reskeyop fields in Resdoms.
Turns out that representation was not convenient for either the planner
or the executor; we can make the planner deliver exactly what the
executor wants with no more effort.
initdb forced due to change in stored rule representation.
for tableID/columnID in RowDescription. (The latter isn't really
implemented yet though --- the backend always sends zeroes, and libpq
just throws away the data.)
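(For completeness: later libpq versions do surface these RowDescription fields. A hedged sketch, assuming res is a PGresult from a query whose first output column is a simple column reference:)

    /* Accessors present in later libpq versions; at the time of this commit
     * libpq simply discarded these fields. */
    Oid     tab = PQftable(res, 0);     /* source table's OID, or InvalidOid */
    int     col = PQftablecol(res, 0);  /* column number within that table, or 0 */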
have length words. COPY OUT reimplemented per new protocol: it doesn't
need \. anymore, thank goodness. COPY BINARY to/from frontend works,
at least as far as the backend is concerned --- libpq's PQgetline API
is not up to snuff, and will have to be replaced with something that is
null-safe. libpq uses message length words for performance improvement
(no cycles wasted rescanning long messages), but not yet for error
recovery.
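For reference, the null-safe replacement that eventually arrived is PQgetCopyData(); below is a hedged sketch of a COPY OUT loop using it, assuming a COPY ... TO STDOUT has already been issued on 'conn' and returned PGRES_COPY_OUT.

    char   *buf;
    int     len;

    /* Each call returns one data row; the buffer length is explicit, so
     * embedded NUL bytes from COPY BINARY are preserved. */
    while ((len = PQgetCopyData(conn, &buf, 0 /* blocking */)) > 0)
    {
        fwrite(buf, 1, len, stdout);
        PQfreemem(buf);
    }
    if (len == -2)
        fprintf(stderr, "COPY OUT failed: %s", PQerrorMessage(conn));
    /* len == -1 means end of COPY; collect the final status with PQgetResult() */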
(materialization into a tuple store) discussed on pgsql-hackers earlier.
I've updated the documentation and the regression tests.
Notes on the implementation:
- I needed to change the tuple store API slightly -- it assumes that it
won't be used to hold data across transaction boundaries, so the temp
files that it uses for on-disk storage are automatically reclaimed at
end-of-transaction. I added a flag to tuplestore_begin_heap() to control
this behavior (a sketch of the resulting call appears after these notes). Is
changing the tuple store API in this fashion OK?
- in order to store executor results in a tuple store, I added a new
CommandDest. This works well for the most part, with one exception: the
current DestFunction API doesn't provide enough information to allow the
Executor to store results into an arbitrary tuple store (where the
particular tuple store to use is chosen by the call site of
ExecutorRun). To work around this, I've temporarily hacked up a solution
that works, but is not ideal: since the receiveTuple DestFunction is
passed the portal name, we can use that to lookup the Portal data
structure for the cursor and then use that to get at the tuple store the
Portal is using. This unnecessarily ties the Portal code to the
tupleReceiver code, but it works...
The proper fix for this is probably to change the DestFunction API --
Tom suggested passing the full QueryDesc to the receiveTuple function.
In that case, callers of ExecutorRun could "subclass" QueryDesc to add
any additional fields that their particular CommandDest needed to get
access to. This approach would work, but I'd like to think about it for
a little bit longer before deciding which route to go. In the meantime,
the code works fine, so I don't think a fix is urgent.
- (semi-related) I added a NO SCROLL keyword to DECLARE CURSOR, and
adjusted the behavior of SCROLL in accordance with the discussion on
-hackers.
- (unrelated) Cleaned up some SGML markup in sql.sgml, copy.sgml
Neil Conway
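Minimal sketch of the tuplestore flag described in the first note above. Argument names follow later PostgreSQL sources, where the signature is tuplestore_begin_heap(randomAccess, interXact, maxKBytes); the exact spelling at the time of this commit may differ.

    Tuplestorestate *store;

    store = tuplestore_begin_heap(true,       /* randomAccess: allow rescans */
                                  true,       /* interXact: keep temp files past
                                               * end of transaction (holdable) */
                                  work_mem);  /* memory budget in kilobytes */

    /* ... the executor's tuple receiver appends result rows to 'store' ... */

    tuplestore_end(store);   /* explicit cleanup; end-of-xact no longer does it */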
that's selecting into a RECORD variable returns zero rows, make it
assign an all-nulls row to the RECORD; this is consistent with what
happens when the SELECT INTO target is not a RECORD. In support of
this, tweak the SPI code so that a valid tuple descriptor is returned
even when a SPI select returns no rows.
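A hedged backend-side sketch of what that SPI guarantee permits. Function names follow current SPI (SPI_execute rather than the older SPI_exec), and the code is assumed to run inside a function that has already called SPI_connect().

    if (SPI_execute("SELECT relname, relpages FROM pg_class WHERE false",
                    true /* read_only */, 0) == SPI_OK_SELECT)
    {
        /* tuple descriptor is valid even though no rows came back */
        TupleDesc   tupdesc = SPI_tuptable->tupdesc;

        if (SPI_processed == 0)
        {
            int     natts  = tupdesc->natts;
            Datum  *values = (Datum *) palloc0(natts * sizeof(Datum));
            bool   *nulls  = (bool *) palloc(natts * sizeof(bool));

            memset(nulls, true, natts * sizeof(bool));
            /* heap_form_tuple(tupdesc, values, nulls) yields the all-nulls row */
        }
    }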
are now both invoked once per received SQL command (raw parsetree) from
pg_exec_query_string. BeginCommand is actually just an empty routine
at the moment --- all its former operations have been pushed into tuple
receiver setup routines in printtup.c. This makes for a clean distinction
between BeginCommand/EndCommand (once per command) and the tuple receiver
setup/teardown routines (once per ExecutorRun call), whereas the old code
was quite ad hoc. Along the way, clean up the calling conventions for
ExecutorRun a little bit.
report for each received SQL command, regardless of rewriting activity.
Also ensure that this report comes from the 'original' command, not the
last command generated by rewrite; this fixes 7.2 breakage for INSERT
commands that have actions added by rules. Fernando Nasser and Tom Lane.
* Makefile: Add more standard targets. Improve shell redirection in GNU
make detection.
* src/backend/access/transam/rmgr.c: Fix incorrect(?) C.
* src/backend/libpq/pqcomm.c (StreamConnection): Work around accept() bug.
* src/include/port/unixware.h: ...with help from here.
* src/backend/nodes/print.c (plannode_type): Remove some "break"s after
"return"s.
* src/backend/tcop/dest.c (DestToFunction): ditto.
* src/backend/nodes/readfuncs.c: Add proper prototypes.
* src/backend/utils/adt/numutils.c (pg_atoi): Cope specially with strtol()
setting EINVAL. This saves us from creating an extra set of regression test
output for the affected systems.
* src/include/storage/s_lock.h (tas): Correct prototype.
* src/interfaces/libpq/fe-connect.c (parseServiceInfo): Don't use variable
as dimension in array definition.
* src/makefiles/Makefile.unixware: Add support for GCC.
* src/template/unixware: same here
* src/test/regress/expected/abstime-solaris-1947.out: Adjust whitespace.
* src/test/regress/expected/horology-solaris-1947.out: Part of this file
was evidently missing.
* src/test/regress/pg_regress.sh: Fix shell. mkdir -p returns non-zero if
the directory exists.
* src/test/regress/resultmap: Add entries for Unixware.
backend functions via backend PQexec(). The SPI interface has long
been our only documented way to do this, and the backend pqexec/portal
code is unused and suffering bit-rot. I'm putting it out of its misery.
can be generated in a buffer and then sent to the frontend in a single
libpq call. This solves problems with NOTICE and ERROR messages generated
in the middle of a data message or COPY OUT operation.
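The buffering pattern looks roughly like this (a hedged sketch; helper names follow pqformat.c in later PostgreSQL sources, e.g. pq_sendint32 rather than the older pq_sendint, and the message layout shown is the later 3.0-protocol DataRow).

    StringInfoData buf;

    /* Build the whole message locally, then emit it in one libpq call, so a
     * concurrent NOTICE or ERROR cannot land in the middle of it. */
    pq_beginmessage(&buf, 'D');         /* 'D' = DataRow */
    pq_sendint16(&buf, 1);              /* one column */
    pq_sendint32(&buf, 5);              /* value length */
    pq_sendbytes(&buf, "hello", 5);     /* value bytes */
    pq_endmessage(&buf);                /* type byte + length word + payload */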