If a PL/Python function raises an SPIError (or one of its subclasses)
directly with Python's raise statement, treat it the same as an SPIError
generated internally. In particular, if the user sets the sqlstate
attribute, preserve that.
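A minimal sketch of what this enables inside a PL/Python function body (the
message and the particular SQLSTATE are just examples):

    err = plpy.SPIError("balance must not be negative")
    err.sqlstate = "23514"   # user-set SQLSTATE is preserved, not overridden
    raise err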
Oskari Saarenmaa and Jan Urbański, reviewed by Karl O. Pinc.
plpython tried to use a single cache entry for a trigger function, but it
needs a separate cache entry for each table the trigger is applied to,
because there is table-dependent data in there. This was done correctly
before 9.1, but commit 46211da1b8 broke it
by simplifying the lookup key from "function OID and triggered table OID"
to "function OID and is-trigger boolean". Go back to using both OIDs
as the lookup key. Per bug report from Sandro Santilli.
Andres Freund
Commit 2cfb1c6f77 fixed some issues caused
by Python 3.3 choosing to iterate through dict entries in a different order
than before. But here's another one: the test cases adjusted here made two
bad entries in a dict and expected that the one complained of would always
be the same.
Possibly this should be back-patched further than 9.2, but there seems
little point unless the earlier fix is too.
We used to convert the unicode object directly to a string in the server
encoding by calling Python's PyUnicode_AsEncodedString function. In other
words, we used Python's routines to do the encoding. However, that has a
few problems. First of all, it required keeping a mapping table of Python
encoding names and PostgreSQL encodings. But the real killer was that Python
doesn't support EUC_TW and MULE_INTERNAL encodings at all.
Instead, convert the Python unicode object to UTF-8, and use PostgreSQL's
encoding conversion functions to convert from UTF-8 to server encoding. We
were already doing the same in the other direction in PLyUnicode_FromString,
so this is more consistent, too.
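Roughly, in Python terms (the actual implementation is C, and the second
step uses PostgreSQL's conversion routines, not Python's):

    value = u"ex\xe4mple"                 # any Python unicode object
    utf8_bytes = value.encode("utf-8")    # always go through UTF-8 first
    # The C code then hands utf8_bytes to PostgreSQL's own encoding-conversion
    # machinery to produce the server encoding, instead of asking Python to
    # encode directly into the server encoding.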
Note: This makes SQL_ASCII behave more leniently. We used to map
SQL_ASCII to Python's 'ascii' codec, which in Python means strict 7-bit
ASCII only, so you got an error if the Python string contained anything but
pure ASCII. You no longer get an error; you get the UTF-8 representation of
the string instead.
Backpatch to 9.0, where these conversions were introduced.
Jan Urbański
The string representation of ImportError changed. Remove printing
that; it's not necessary for the test.
The order in which members of a dict are printed changed. But this
was always implementation-dependent, so we have just been lucky for a
long time. Do the printing the hard way to ensure sorted order.
The old way of implementing slicing support by implementing
PySequenceMethods.sq_slice no longer works in Python 3. You now have
to implement PyMappingMethods.mp_subscript. Do this by simply
proxying the call to the wrapped list of result dictionaries.
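Conceptually, in Python terms (the real change is in the C type slots), the
result object now just forwards subscripting to its row list:

    class ResultRows(object):
        """Conceptual stand-in for the C result object."""
        def __init__(self, rows):
            self._rows = rows            # the wrapped list of row dictionaries
        def __getitem__(self, item):     # what mp_subscript maps to
            return self._rows[item]      # item may be an index or a slice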
Consolidate some of the subscripting regression tests.
Jan Urbański
It was already on its last legs, and it turns out that it was
accidentally broken in commit 89e850e6fd
and no one cared. So remove the rest of the support for it and update
the documentation to indicate that Python 2.3 is now required.
Add test cases for the inline handler of plpython2u (when using that
language name), and for result object element assignment. There is
now at least one test case for every top-level functionality, except
plpy.Fatal (annoying to use in regression tests) and result object
slice retrieval and slice assignment (which are somewhat broken).
Allocate PLyResultObject.tupdesc in TopMemoryContext, because its
lifetime is the lifetime of the Python object and it shouldn't be
freed by some other memory context, such as one controlled by SPI. We
trust that the Python object will clean up its own memory.
Before, this would crash the included regression test case by trying
to use memory that was already freed.
reported by Asif Naeem, analysis by Tom Lane
Before 9.1, PL/Python functions returning composite types could return
a string and it would be parsed using record_in. The 9.1 changes made
PL/Python only expect dictionaries, tuples, or objects supporting
getattr as output of composite functions, resulting in a regression
and a confusing error message, as the strings were interpreted as
sequences and the code for transforming lists to database tuples was
used. Fix this by treating strings separately, as before, prior to
checking for the other types.
The reason why it's important to support string to database tuple
conversion is that trigger functions on tables with composite columns
get the composite row passed in as a string (from record_out).
Without supporting converting this back using record_in, this makes it
impossible to implement pass-through behavior for these columns, as
PL/Python no longer accepts strings for composite values.
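As a rough illustration, inside a PL/Python trigger function on a table with
a composite column (hypothetically named "pair"), pass-through works again:

    # TD["new"]["pair"] arrives as record_out text, e.g. "(foo,42)"
    TD["new"]["pair"] = TD["new"]["pair"]   # handing the string back is
                                            # accepted again via record_in
    return "MODIFY"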
A better solution would be to fix the code that transforms composite
inputs into Python objects to produce dictionaries that would then be
correctly interpreted by the Python->PostgreSQL counterpart code. But
that would be too invasive to backpatch to 9.1, and it is too late in
the 9.2 cycle to attempt it. It should be revisited in the future,
though.
Reported as bug #6559 by Kirill Simonov.
Jan Urbański
The result object methods colnames() etc. would crash when called
after a command that did not produce a result set. Now they throw an
exception.
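For example:

    rv = plpy.execute("CREATE TABLE t (a int)")   # produces no result set
    rv.colnames()   # used to crash; now raises an exception instead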
discovery and initial patch by Jean-Baptiste Quenot
Don't quote the output of format_procedure(); it's already quoted quite
enough. Remove the fn_name field, which was now just dead weight. Fix
remaining expected-output files.
Add result object functions .colnames, .coltypes, .coltypmods to
obtain information about the result column names and types, which was
previously not possible in the PL/Python SPI interface.
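A small usage sketch:

    rv = plpy.execute("SELECT 1 AS a, 'x'::text AS b")
    plpy.notice(rv.colnames())     # ['a', 'b']
    plpy.notice(rv.coltypes())     # type OIDs of the result columns
    plpy.notice(rv.coltypmods())   # typmods of the result columns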
reviewed by Abhijit Menon-Sen
Add a function plpy.cursor that is similar to plpy.execute but uses an
SPI cursor to avoid fetching the entire result set into memory.
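A sketch of the intended usage (the query is arbitrary):

    cur = plpy.cursor("SELECT i FROM generate_series(1, 1000000) AS s(i)")
    total = 0
    while True:
        rows = cur.fetch(100)              # pull the rows in small batches
        if not rows:
            break
        total += sum(row["i"] for row in rows)
    cur.close()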
Jan Urbański, reviewed by Steve Singer
exception handler. This was a regression in 9.1, when the capability
to catch specific SPI errors was added, so backpatch to 9.1.
Mika Eloranta, with some editing by Jan Urbański.
Dropped columns within a composite type were not handled correctly.
Also, we did not check for whether a composite result type had changed
since we cached the information about it.
Jan Urbański, per a bug report from Jean-Baptiste Quenot
This mostly just involves creating control, install, and
update-from-unpackaged scripts for them. However, I had to adjust plperl
and plpython to not share the same support functions between variants,
because we can't put the same function into multiple extensions.
catversion bump forced due to new contents of pg_pltemplate, and because
initdb now installs plpgsql as an extension not a bare language.
Add support for regression testing these as extensions not bare
languages.
Fix a couple of other issues that popped up while testing this: my initial
hack at pg_dump binary-upgrade support didn't work right, and we don't want
an extra schema permissions test after all.
Documentation changes still to come, but I'm committing now to see
whether the MSVC build scripts need work (likely they do).
This provides a separate exception class for each error code that the
backend defines, as well as the ability to get the SQLSTATE from the
exception object.
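For example:

    from plpy import spiexceptions

    try:
        plpy.execute("SELECT 1/0")
    except spiexceptions.DivisionByZero as e:
        plpy.notice("caught %s, SQLSTATE %s" % (type(e).__name__, e.sqlstate))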
Jan Urbański, reviewed by Steve Singer
Adds a context manager, obtainable by plpy.subtransaction(), to run a
group of statements in a subtransaction.
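Sketch of usage (the accounts table is hypothetical):

    try:
        with plpy.subtransaction():
            plpy.execute("INSERT INTO accounts (balance) VALUES (100)")
            plpy.execute("INSERT INTO accounts (balance) VALUES (-100)")
    except plpy.SPIError:
        # both INSERTs are undone together when the subtransaction aborts
        plpy.warning("transfer failed, changes rolled back")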
Jan Urbański, reviewed by Steve Singer, additional scribbling by me
We don't have complete expected coverage for Python 2.2 anyway, so it
doesn't seem worth keeping this one around when no one appears to be
updating it. Visual inspection of the differences ought to be good enough
for the few who care about this obsolete Python version.
This allows functions with multiple OUT parameters to return either a
single record (RECORD) or multiple records (SETOF RECORD).
Jan Urbański, reviewed by Hitoshi Harada
Add functions plpy.quote_ident, plpy.quote_literal,
plpy.quote_nullable, which wrap the equivalent SQL functions.
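For instance (the table and column names are made up):

    note = None    # a value that may legitimately be NULL
    plpy.execute("UPDATE %s SET %s = %s" % (
        plpy.quote_ident("my_table"),   # identifier quoting, like quote_ident()
        plpy.quote_ident("note"),
        plpy.quote_nullable(note),      # None becomes NULL, strings are escaped
    ))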
To be able to propagate char * constness properly, make the argument
of quote_literal_cstr() const char *. This also makes it more
consistent with quote_identifier().
Jan Urbański, reviewed by Hitoshi Harada, some refinements by Peter
Eisentraut
This allows the language-specific try/catch construct to catch and
handle exceptions arising from SPI calls, matching the behavior of
other PLs.
As an additional bonus, you no longer get all the ugly "unrecognized
error in PLy_spi_execute_query" errors.
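So code like the following now works as in the other PLs (the query is
illustrative):

    try:
        plpy.execute("SELECT no_such_function()")
    except plpy.SPIError:
        plpy.warning("SPI call failed; handling it instead of aborting")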
Jan Urbański, reviewed by Steve Singer
Use the built-in TypeError, not SPIError, for errors having to do with
argument counts or types. Use SPIError, not simply plpy.Error, for
errors in PLy_spi_execute_plan. Finally, do not set a Python
exception if PyArg_ParseTuple failed, as it already sets the correct
exception.
Jan Urbański
It's not clear to me what should happen to the other plpython_unicode
variant expected files, but this patch gets things passing on my own
machines and at least some of the buildfarm.
Global error handling led to confusion and was hard to manage. With
this change, errors from PostgreSQL are immediately reported to Python
as exceptions. This requires setting a Python exception after
reporting the caught PostgreSQL error as a warning, because PLy_elog
destroys the Python exception state.
Ideally, all places where PostgreSQL errors need to be reported back
to Python should be wrapped in subtransactions, to make going back to
Python from a longjmp safe. This will be handled in a separate patch.
Jan Urbański
The way the exception types were added to the module was wrong for
Python 3. Exception classes were not actually available from plpy.
Fix that by factoring out code that is responsible for defining new
Python exceptions and make it work with Python 3. New regression test
makes sure the plpy module has the expected contents.
Jan Urbański, slightly revised by me
We must stay in the function's SPI context until done calling the iterator
that returns the set result. Otherwise, any attempt to invoke SPI features
in the python code called by the iterator will malfunction. Diagnosis and
patch by Jan Urbanski, per bug report from Jean-Baptiste Quenot.
Back-patch to 8.2; there was no support for SRFs in previous versions of
plpython.
This patch adds the SQL-standard concept of an INSTEAD OF trigger, which
is fired instead of performing a physical insert/update/delete. The
trigger function is passed the entire old and/or new rows of the view,
and must figure out what to do to the underlying tables to implement
the update. So this feature can be used to implement updatable views
using trigger programming style rather than rule hacking.
In passing, this patch corrects the names of some columns in the
information_schema.triggers view. It seems the SQL committee renamed
them somewhere between SQL:99 and SQL:2003.
Dean Rasheed, reviewed by Bernd Helmle; some additional hacking by me.
In PLy_spi_execute_plan, use the data-type specific Python-to-PostgreSQL
conversion function instead of passing everything through InputFunctionCall
as a string. The equivalent fix was already done months ago for function
parameters and return values, but this other gateway between Python and
PostgreSQL was apparently forgotten. As a result, data types that need
special treatment, such as bytea, would misbehave when used with
plpy.execute.
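Illustrative sketch (the blobs table is hypothetical):

    plan = plpy.prepare("INSERT INTO blobs (data) VALUES ($1)", ["bytea"])
    # The bytea argument is now handled by the type-specific conversion
    # routine instead of being fed to the input function as a string.
    plpy.execute(plan, ["\x00\x01\x02raw bytes"])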