postgres/src/pl/plpython/plpy_main.c

/*
 * PL/Python main entry points
 *
 * src/pl/plpython/plpy_main.c
 */

#include "postgres.h"

#include "access/htup_details.h"
#include "catalog/pg_proc.h"
#include "catalog/pg_type.h"
#include "commands/trigger.h"
#include "executor/spi.h"
#include "miscadmin.h"
#include "plpy_elog.h"
#include "plpy_exec.h"
#include "plpy_main.h"
#include "plpy_plpymodule.h"
#include "plpy_procedure.h"
#include "plpy_subxactobject.h"
#include "plpython.h"
#include "utils/guc.h"
#include "utils/memutils.h"
#include "utils/rel.h"
#include "utils/syscache.h"

/*
 * exported functions
 */
PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(plpython3_validator);
PG_FUNCTION_INFO_V1(plpython3_call_handler);
PG_FUNCTION_INFO_V1(plpython3_inline_handler);

static bool PLy_procedure_is_trigger(Form_pg_proc procStruct);
static void plpython_error_callback(void *arg);
static void plpython_inline_error_callback(void *arg);
static void PLy_init_interp(void);
static PLyExecutionContext *PLy_push_execution_context(bool atomic_context);
static void PLy_pop_execution_context(void);

/* static state for Python library conflict detection */
static int *plpython_version_bitmask_ptr = NULL;
static int  plpython_version_bitmask = 0;

/* initialize global variables */
PyObject   *PLy_interp_globals = NULL;

/* this doesn't need to be global; use PLy_current_execution_context() */
static PLyExecutionContext *PLy_execution_contexts = NULL;
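
/*
 * Descriptive note: the contexts form a simple linked stack (see
 * PLyExecutionContext in plpy_main.h): PLy_push_execution_context() adds a
 * new head and PLy_pop_execution_context() restores the previous one, so
 * this variable always points at the innermost active PL/Python call.
 */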

void
_PG_init(void)
{
    int       **bitmask_ptr;

    /*
     * Set up a shared bitmask variable telling which Python version(s) are
     * loaded into this process's address space. If there's more than one, we
     * cannot call into libpython for fear of causing crashes. But postpone
     * the actual failure for later, so that operations like pg_restore can
     * load more than one plpython library so long as they don't try to do
     * anything much with the language.
     *
     * While we only support Python 3 these days, somebody might create an
     * out-of-tree version adding back support for Python 2. Conflicts with
     * such an extension should be detected.
     */
    bitmask_ptr = (int **) find_rendezvous_variable("plpython_version_bitmask");
    if (!(*bitmask_ptr))        /* am I the first? */
        *bitmask_ptr = &plpython_version_bitmask;
    /* Retain pointer to the agreed-on shared variable ... */
    plpython_version_bitmask_ptr = *bitmask_ptr;
    /* ... and announce my presence */
    *plpython_version_bitmask_ptr |= (1 << PY_MAJOR_VERSION);

    /*
     * This should be safe even in the presence of conflicting plpythons, and
     * it's necessary to do it before possibly throwing a conflict error, or
     * the error message won't get localized.
     */
    pg_bindtextdomain(TEXTDOMAIN);
}
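
/*
 * Worked example of the version handshake (illustrative, not from the
 * original file): PY_MAJOR_VERSION is 3 here, so this library sets bit
 * 1 << 3 == 0x08 in the shared bitmask. As long as 0x08 is the only bit
 * set, the equality test in PLy_initialize() below succeeds. If a
 * hypothetical Python-2-based plpython also announced itself, the mask
 * would become 0x08 | 0x04 == 0x0c, the test would fail, and
 * PLy_initialize() would raise its FATAL error.
 */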

/*
 * Perform one-time setup of PL/Python, after checking for a conflict
 * with other versions of Python.
 */
static void
PLy_initialize(void)
{
    static bool inited = false;

    /*
     * Check for multiple Python libraries before actively doing anything with
     * libpython. This must be repeated on each entry to PL/Python, in case a
     * conflicting library got loaded since we last looked.
     *
     * It is attractive to weaken this error from FATAL to ERROR, but there
     * would be corner cases, so it seems best to be conservative.
     */
    if (*plpython_version_bitmask_ptr != (1 << PY_MAJOR_VERSION))
        ereport(FATAL,
                (errmsg("multiple Python libraries are present in session"),
                 errdetail("Only one Python major version can be used in one session.")));

    /* The rest should only be done once per session */
    if (inited)
        return;

    PyImport_AppendInittab("plpy", PyInit_plpy);
    Py_Initialize();
    PyImport_ImportModule("plpy");
    PLy_init_interp();
    PLy_init_plpy();
    if (PyErr_Occurred())
        PLy_elog(FATAL, "untrapped error in initialization");
    init_procedure_caches();
    explicit_subtransactions = NIL;
    PLy_execution_contexts = NULL;

    inited = true;
}

/*
 * This should be called only once, from PLy_initialize. Initialize the Python
 * interpreter and global data.
 */
static void
PLy_init_interp(void)
{
    static PyObject *PLy_interp_safe_globals = NULL;
    PyObject   *mainmod;

    mainmod = PyImport_AddModule("__main__");
    if (mainmod == NULL || PyErr_Occurred())
        PLy_elog(ERROR, "could not import \"__main__\" module");
    Py_INCREF(mainmod);
    PLy_interp_globals = PyModule_GetDict(mainmod);
    PLy_interp_safe_globals = PyDict_New();
    if (PLy_interp_safe_globals == NULL)
        PLy_elog(ERROR, NULL);
    PyDict_SetItemString(PLy_interp_globals, "GD", PLy_interp_safe_globals);
    Py_DECREF(mainmod);
    if (PLy_interp_globals == NULL || PyErr_Occurred())
        PLy_elog(ERROR, "could not initialize globals");
}

Datum
plpython3_validator(PG_FUNCTION_ARGS)
{
    Oid         funcoid = PG_GETARG_OID(0);
    HeapTuple   tuple;
    Form_pg_proc procStruct;
    bool        is_trigger;

    if (!CheckFunctionValidatorAccess(fcinfo->flinfo->fn_oid, funcoid))
        PG_RETURN_VOID();

    if (!check_function_bodies)
        PG_RETURN_VOID();

    /* Do this only after making sure we need to do something */
    PLy_initialize();

    /* Get the new function's pg_proc entry */
    tuple = SearchSysCache1(PROCOID, ObjectIdGetDatum(funcoid));
    if (!HeapTupleIsValid(tuple))
        elog(ERROR, "cache lookup failed for function %u", funcoid);
    procStruct = (Form_pg_proc) GETSTRUCT(tuple);

    is_trigger = PLy_procedure_is_trigger(procStruct);

    ReleaseSysCache(tuple);

    /* We can't validate triggers against any particular table ... */
    PLy_procedure_get(funcoid, InvalidOid, is_trigger);

    PG_RETURN_VOID();
}
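
/*
 * Restore-time consequence (see the comment in _PG_init above): pg_dump
 * output sets check_function_bodies to off, so re-creating plpython3u
 * functions during a restore returns from this validator before
 * PLy_initialize() is reached and never calls into libpython, which is
 * what lets a restore session survive even with conflicting Python
 * libraries loaded.
 */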

Datum
plpython3_call_handler(PG_FUNCTION_ARGS)
{
    bool        nonatomic;
    Datum       retval;
    PLyExecutionContext *exec_ctx;
    ErrorContextCallback plerrcontext;

    PLy_initialize();

    nonatomic = fcinfo->context &&
        IsA(fcinfo->context, CallContext) &&
        !castNode(CallContext, fcinfo->context)->atomic;

    /* Note: SPI_finish() happens in plpy_exec.c, which is dubious design */
    SPI_connect_ext(nonatomic ? SPI_OPT_NONATOMIC : 0);

    /*
     * Push execution context onto stack. It is important that this get
     * popped again, so avoid putting anything that could throw error between
     * here and the PG_TRY.
     */
    exec_ctx = PLy_push_execution_context(!nonatomic);

    PG_TRY();
    {
        Oid         funcoid = fcinfo->flinfo->fn_oid;
        PLyProcedure *proc;

        /*
         * Setup error traceback support for ereport(). Note that the PG_TRY
         * structure pops this for us again at exit, so we needn't do that
         * explicitly, nor do we risk the callback getting called after we've
         * destroyed the exec_ctx.
         */
        plerrcontext.callback = plpython_error_callback;
        plerrcontext.arg = exec_ctx;
        plerrcontext.previous = error_context_stack;
        error_context_stack = &plerrcontext;

        if (CALLED_AS_TRIGGER(fcinfo))
        {
            Relation    tgrel = ((TriggerData *) fcinfo->context)->tg_relation;
            HeapTuple   trv;

            proc = PLy_procedure_get(funcoid, RelationGetRelid(tgrel), true);
            exec_ctx->curr_proc = proc;
            trv = PLy_exec_trigger(fcinfo, proc);
            retval = PointerGetDatum(trv);
        }
        else
        {
            proc = PLy_procedure_get(funcoid, InvalidOid, false);
            exec_ctx->curr_proc = proc;
            retval = PLy_exec_function(fcinfo, proc);
        }
    }
    PG_CATCH();
    {
        PLy_pop_execution_context();
        PyErr_Clear();
        PG_RE_THROW();
    }
    PG_END_TRY();

    /* Destroy the execution context */
    PLy_pop_execution_context();

    return retval;
}
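
/*
 * Illustrative usage (hypothetical object names): the nonatomic branch is
 * taken only when this handler is reached from a top-level CALL, whose
 * CallContext has atomic == false. Such a procedure may drive transactions
 * itself:
 *
 *     CREATE PROCEDURE batch_load() LANGUAGE plpython3u AS $$
 *     plpy.execute("INSERT INTO t VALUES (1)")
 *     plpy.commit()   # permitted here; raises an error in atomic contexts
 *     $$;
 *     CALL batch_load();
 *
 * The same body reached through SELECT from a function would run with a
 * normal atomic SPI connection.
 */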

Datum
plpython3_inline_handler(PG_FUNCTION_ARGS)
{
    LOCAL_FCINFO(fake_fcinfo, 0);
    InlineCodeBlock *codeblock = (InlineCodeBlock *) DatumGetPointer(PG_GETARG_DATUM(0));
    FmgrInfo    flinfo;
    PLyProcedure proc;
    PLyExecutionContext *exec_ctx;
    ErrorContextCallback plerrcontext;

    PLy_initialize();

    /* Note: SPI_finish() happens in plpy_exec.c, which is dubious design */
    SPI_connect_ext(codeblock->atomic ? 0 : SPI_OPT_NONATOMIC);

    MemSet(fake_fcinfo, 0, SizeForFunctionCallInfo(0));
    MemSet(&flinfo, 0, sizeof(flinfo));
    fake_fcinfo->flinfo = &flinfo;
    flinfo.fn_oid = InvalidOid;
    flinfo.fn_mcxt = CurrentMemoryContext;

    MemSet(&proc, 0, sizeof(PLyProcedure));
    proc.mcxt = AllocSetContextCreate(TopMemoryContext,
                                      "__plpython_inline_block",
                                      ALLOCSET_DEFAULT_SIZES);
    proc.pyname = MemoryContextStrdup(proc.mcxt, "__plpython_inline_block");
    proc.langid = codeblock->langOid;

    /*
     * This is currently sufficient to get PLy_exec_function to work, but
     * someday we might need to be honest and use PLy_output_setup_func.
     */
    proc.result.typoid = VOIDOID;

    /*
     * Push execution context onto stack. It is important that this get
     * popped again, so avoid putting anything that could throw error between
     * here and the PG_TRY.
     */
    exec_ctx = PLy_push_execution_context(codeblock->atomic);

    PG_TRY();
    {
        /*
         * Setup error traceback support for ereport().
         * plpython_inline_error_callback doesn't currently need exec_ctx, but
         * for consistency with plpython3_call_handler we do it the same way.
         */
        plerrcontext.callback = plpython_inline_error_callback;
        plerrcontext.arg = exec_ctx;
        plerrcontext.previous = error_context_stack;
        error_context_stack = &plerrcontext;

        PLy_procedure_compile(&proc, codeblock->source_text);
        exec_ctx->curr_proc = &proc;
        PLy_exec_function(fake_fcinfo, &proc);
    }
    PG_CATCH();
    {
        PLy_pop_execution_context();
        PLy_procedure_delete(&proc);
        PyErr_Clear();
        PG_RE_THROW();
    }
    PG_END_TRY();

    /* Destroy the execution context */
    PLy_pop_execution_context();

    /* Now clean up the transient procedure we made */
    PLy_procedure_delete(&proc);

    PG_RETURN_VOID();
}
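
/*
 * Illustrative usage: this handler backs anonymous code blocks, e.g.
 *
 *     DO LANGUAGE plpython3u $$
 *     plpy.notice("hello from an inline block")
 *     $$;
 *
 * For a top-level DO, codeblock->atomic is false, so the block gets a
 * nonatomic SPI connection and may likewise call plpy.commit() and
 * plpy.rollback().
 */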

static bool
PLy_procedure_is_trigger(Form_pg_proc procStruct)
{
    return (procStruct->prorettype == TRIGGEROID);
}

static void
plpython_error_callback(void *arg)
{
    PLyExecutionContext *exec_ctx = (PLyExecutionContext *) arg;

    if (exec_ctx->curr_proc)
    {
        if (exec_ctx->curr_proc->is_procedure)
            errcontext("PL/Python procedure \"%s\"",
                       PLy_procedure_name(exec_ctx->curr_proc));
        else
            errcontext("PL/Python function \"%s\"",
                       PLy_procedure_name(exec_ctx->curr_proc));
    }
}

static void
plpython_inline_error_callback(void *arg)
{
    errcontext("PL/Python anonymous code block");
}

PLyExecutionContext *
PLy_current_execution_context(void)
{
    if (PLy_execution_contexts == NULL)
        elog(ERROR, "no Python function is currently executing");

    return PLy_execution_contexts;
}
MemoryContext
PLy_get_scratch_context(PLyExecutionContext *context)
{
	/*
	 * A scratch context might never be needed in a given plpython procedure,
	 * so allocate it on first request.
	 */
	if (context->scratch_ctx == NULL)
		context->scratch_ctx =
			AllocSetContextCreate(TopTransactionContext,
								  "PL/Python scratch context",
								  ALLOCSET_DEFAULT_SIZES);
	return context->scratch_ctx;
}
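
/*
 * Push a new execution context onto the stack.  The memory context choice
 * mirrors SPI's: TopTransactionContext for atomic calls, PortalContext for
 * nonatomic ones, so the struct survives transaction control statements.
 */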
static PLyExecutionContext *
PLy_push_execution_context(bool atomic_context)
{
	PLyExecutionContext *context;

	/* Pick a memory context similar to what SPI uses. */
	context = (PLyExecutionContext *)
		MemoryContextAlloc(atomic_context ? TopTransactionContext : PortalContext,
						   sizeof(PLyExecutionContext));
	context->curr_proc = NULL;
	context->scratch_ctx = NULL;
	context->next = PLy_execution_contexts;
	PLy_execution_contexts = context;
	return context;
}
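
/*
 * Pop the current execution context from the stack and release its memory,
 * including the scratch context if one was created.
 */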
static void
PLy_pop_execution_context(void)
{
	PLyExecutionContext *context = PLy_execution_contexts;

	if (context == NULL)
		elog(ERROR, "no Python function is currently executing");

	PLy_execution_contexts = context->next;

	if (context->scratch_ctx)
		MemoryContextDelete(context->scratch_ctx);
	pfree(context);
}