/*-------------------------------------------------------------------------
 *
 * auth.c
 *	  Routines to handle network authentication
 *
 * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 *
 * IDENTIFICATION
 *	  src/backend/libpq/auth.c
 *
 *-------------------------------------------------------------------------
 */
#include "postgres.h"
#include <sys/param.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>
#include <pwd.h>
#include <unistd.h>
#include "commands/user.h"
#include "common/ip.h"
#include "common/md5.h"
#include "libpq/auth.h"
#include "libpq/crypt.h"
#include "libpq/libpq.h"
#include "libpq/pqformat.h"
#include "libpq/sasl.h"
#include "libpq/scram.h"
#include "miscadmin.h"
#include "port/pg_bswap.h"
#include "postmaster/postmaster.h"
#include "replication/walsender.h"
#include "storage/ipc.h"
#include "utils/guc.h"
#include "utils/memutils.h"
#include "utils/timestamp.h"
/*----------------------------------------------------------------
 * Global authentication functions
 *----------------------------------------------------------------
 */
static void auth_failed(Port *port, int status, const char *logdetail);
static char *recv_password_packet(Port *port);
static void set_authn_id(Port *port, const char *id);
/*----------------------------------------------------------------
 * Password-based authentication methods (password, md5, and scram-sha-256)
 *----------------------------------------------------------------
 */
static int CheckPasswordAuth(Port *port, const char **logdetail);
static int CheckPWChallengeAuth(Port *port, const char **logdetail);
static int	CheckMD5Auth(Port *port, char *shadow_pass,
						 const char **logdetail);
/*----------------------------------------------------------------
 * Ident authentication
 *----------------------------------------------------------------
 */
/* Max size of username ident server can return (per RFC 1413) */
#define IDENT_USERNAME_MAX 512
/* Standard TCP port number for Ident service. Assigned by IANA */
#define IDENT_PORT 113
static int ident_inet(hbaPort *port);
/*----------------------------------------------------------------
 * Peer authentication
 *----------------------------------------------------------------
 */
static int auth_peer(hbaPort *port);
/*----------------------------------------------------------------
 * PAM authentication
 *----------------------------------------------------------------
 */
#ifdef USE_PAM
#ifdef HAVE_PAM_PAM_APPL_H
#include <pam/pam_appl.h>
#endif
#ifdef HAVE_SECURITY_PAM_APPL_H
#include <security/pam_appl.h>
#endif
#define PGSQL_PAM_SERVICE "postgresql" /* Service name passed to PAM */
static int CheckPAMAuth(Port *port, const char *user, const char *password);
static int	pam_passwd_conv_proc(int num_msg, const struct pam_message **msg,
								 struct pam_response **resp, void *appdata_ptr);
static struct pam_conv pam_passw_conv = {
	&pam_passwd_conv_proc,
	NULL
};
static const char *pam_passwd = NULL;	/* Workaround for Solaris 2.6
										 * brokenness */
static Port *pam_port_cludge;	/* Workaround for passing "Port *port" into
								 * pam_passwd_conv_proc */
static bool pam_no_password; /* For detecting no-password-given */
#endif /* USE_PAM */
/*----------------------------------------------------------------
 * BSD authentication
 *----------------------------------------------------------------
 */
#ifdef USE_BSD_AUTH
#include <bsd_auth.h>
static int CheckBSDAuth(Port *port, char *user);
#endif /* USE_BSD_AUTH */
/*----------------------------------------------------------------
 * LDAP authentication
 *----------------------------------------------------------------
 */
#ifdef USE_LDAP
#ifndef WIN32
/* We use a deprecated function to keep the codepath the same as win32. */
#define LDAP_DEPRECATED 1
#include <ldap.h>
#else
#include <winldap.h>
#endif
static int CheckLDAPAuth(Port *port);
/* LDAP_OPT_DIAGNOSTIC_MESSAGE is the newer spelling */
#ifndef LDAP_OPT_DIAGNOSTIC_MESSAGE
#define LDAP_OPT_DIAGNOSTIC_MESSAGE LDAP_OPT_ERROR_STRING
#endif
/* Default LDAP password mutator hook, can be overridden by a shared library */
static char *dummy_ldap_password_mutator(char *input);
auth_password_hook_typ ldap_password_hook = dummy_ldap_password_mutator;
#endif /* USE_LDAP */
/*----------------------------------------------------------------
 * Cert authentication
 *----------------------------------------------------------------
 */
#ifdef USE_SSL
static int CheckCertAuth(Port *port);
#endif
/*----------------------------------------------------------------
 * Kerberos and GSSAPI GUCs
 *----------------------------------------------------------------
 */
char *pg_krb_server_keyfile;
bool pg_krb_caseins_users;
bool pg_gss_accept_deleg;
/*----------------------------------------------------------------
 * GSSAPI Authentication
 *----------------------------------------------------------------
 */
#ifdef ENABLE_GSS
#include "libpq/be-gssapi-common.h"
static int pg_GSS_checkauth(Port *port);
static int pg_GSS_recvauth(Port *port);
#endif /* ENABLE_GSS */
/*----------------------------------------------------------------
 * SSPI Authentication
 *----------------------------------------------------------------
 */
#ifdef ENABLE_SSPI
typedef SECURITY_STATUS
			(WINAPI * QUERY_SECURITY_CONTEXT_TOKEN_FN) (PCtxtHandle, void **);
static int pg_SSPI_recvauth(Port *port);
static int	pg_SSPI_make_upn(char *accountname,
							 size_t accountnamesize,
							 char *domainname,
							 size_t domainnamesize,
							 bool update_accountname);
#endif
/*----------------------------------------------------------------
 * RADIUS Authentication
 *----------------------------------------------------------------
 */
static int CheckRADIUSAuth(Port *port);
static int PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
/*
 * Maximum accepted size of GSS and SSPI authentication tokens.
 * We also use this as a limit on ordinary password packet lengths.
 *
 * Kerberos tickets are usually quite small, but the TGTs issued by Windows
 * domain controllers include an authorization field known as the Privilege
 * Attribute Certificate (PAC), which contains the user's Windows permissions
 * (group memberships etc.). The PAC is copied into all tickets obtained on
 * the basis of this TGT (even those issued by Unix realms which the Windows
 * realm trusts), and can be several kB in size. The maximum token size
 * accepted by Windows systems is determined by the MaxAuthToken Windows
 * registry setting. Microsoft recommends that it is not set higher than
 * 65535 bytes, so that seems like a reasonable limit for us as well.
 */
#define PG_MAX_AUTH_TOKEN_LENGTH 65535
/*----------------------------------------------------------------
 * Global authentication functions
 *----------------------------------------------------------------
 */
/*
 * This hook allows plugins to get control following client authentication,
 * but before the user has been informed about the results. It could be used
 * to record login events, insert a delay after failed authentication, etc.
 */
ClientAuthentication_hook_type ClientAuthentication_hook = NULL;
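/*
 * Illustrative sketch (not part of the original file, guarded out of the
 * build): roughly how an extension module might install this hook from its
 * _PG_init(), chaining to any previously installed hook.  The names
 * example_client_auth_hook and prev_client_auth_hook are hypothetical;
 * contrib/auth_delay follows a similar pattern.
 */
#ifdef CLIENT_AUTH_HOOK_EXAMPLE
static ClientAuthentication_hook_type prev_client_auth_hook = NULL;

/* Log every failed authentication attempt, then let any prior hook run. */
static void
example_client_auth_hook(Port *port, int status)
{
	if (status != STATUS_OK)
		ereport(LOG,
				(errmsg("authentication attempt failed for user \"%s\"",
						port->user_name)));

	if (prev_client_auth_hook)
		prev_client_auth_hook(port, status);
}

void
_PG_init(void)
{
	/* Save the previous hook so the chain is preserved. */
	prev_client_auth_hook = ClientAuthentication_hook;
	ClientAuthentication_hook = example_client_auth_hook;
}
#endif							/* CLIENT_AUTH_HOOK_EXAMPLE */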
/*
 * Tell the user the authentication failed, but not (much about) why.
 *
 * There is a tradeoff here between security concerns and making life
 * unnecessarily difficult for legitimate users. We would not, for example,
 * want to report the password we were expecting to receive...
 * But it seems useful to report the username and authorization method
 * in use, and these are items that must be presumed known to an attacker
 * anyway.
 * Note that many sorts of failure report additional information in the
 * postmaster log, which we hope is only readable by good guys. In
 * particular, if logdetail isn't NULL, we send that string to the log.
 */
static void
auth_failed(Port *port, int status, const char *logdetail)
{
	const char *errstr;
	char	   *cdetail;
	int			errcode_return = ERRCODE_INVALID_AUTHORIZATION_SPECIFICATION;

	/*
	 * If we failed due to EOF from client, just quit; there's no point in
	 * trying to send a message to the client, and not much point in logging
	 * the failure in the postmaster log. (Logging the failure might be
	 * desirable, were it not for the fact that libpq closes the connection
	 * unceremoniously if challenged for a password when it hasn't got one to
	 * send. We'll get a useless log entry for every psql connection under
	 * password auth, even if it's perfectly successful, if we log STATUS_EOF
	 * events.)
	 */
	if (status == STATUS_EOF)
		proc_exit(0);

	switch (port->hba->auth_method)
	{
		case uaReject:
		case uaImplicitReject:
			errstr = gettext_noop("authentication failed for user \"%s\": host rejected");
			break;
		case uaTrust:
			errstr = gettext_noop("\"trust\" authentication failed for user \"%s\"");
			break;
		case uaIdent:
			errstr = gettext_noop("Ident authentication failed for user \"%s\"");
			break;
		case uaPeer:
			errstr = gettext_noop("Peer authentication failed for user \"%s\"");
			break;
		case uaPassword:
		case uaMD5:
		case uaSCRAM:
			errstr = gettext_noop("password authentication failed for user \"%s\"");
			/* We use it to indicate if a .pgpass password failed. */
			errcode_return = ERRCODE_INVALID_PASSWORD;
			break;
		case uaGSS:
			errstr = gettext_noop("GSSAPI authentication failed for user \"%s\"");
			break;
		case uaSSPI:
			errstr = gettext_noop("SSPI authentication failed for user \"%s\"");
			break;
		case uaPAM:
			errstr = gettext_noop("PAM authentication failed for user \"%s\"");
			break;
		case uaBSD:
			errstr = gettext_noop("BSD authentication failed for user \"%s\"");
			break;
		case uaLDAP:
			errstr = gettext_noop("LDAP authentication failed for user \"%s\"");
			break;
		case uaCert:
			errstr = gettext_noop("certificate authentication failed for user \"%s\"");
			break;
		case uaRADIUS:
			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
			break;
		default:
			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
			break;
	}

	cdetail = psprintf(_("Connection matched %s line %d: \"%s\""),
					   port->hba->sourcefile, port->hba->linenumber,
					   port->hba->rawline);
	if (logdetail)
		logdetail = psprintf("%s\n%s", logdetail, cdetail);
	else
		logdetail = cdetail;

	ereport(FATAL,
			(errcode(errcode_return),
			 errmsg(errstr, port->user_name),
			 logdetail ? errdetail_log("%s", logdetail) : 0));

	/* doesn't return */
}
/*
 * Sets the authenticated identity for the current user. The provided string
 * will be stored into MyClientConnectionInfo, alongside the current HBA
 * method in use. The ID will be logged if log_connections is enabled.
*
* Auth methods should call this routine exactly once, as soon as the user is
* successfully authenticated, even if they have reasons to know that
* authorization will fail later.
*
* The provided string will be copied into TopMemoryContext, to match the
* lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
* managed by an external library.
*/
static void
set_authn_id(Port *port, const char *id)
{
Assert(id);
if (MyClientConnectionInfo.authn_id)
{
/*
* An existing authn_id should never be overwritten; that means two
* authentication providers are fighting (or one is fighting itself).
* Don't leak any authn details to the client, but don't let the
* connection continue, either.
*/
ereport(FATAL,
(errmsg("authentication identifier set more than once"),
errdetail_log("previous identifier: \"%s\"; new identifier: \"%s\"",
MyClientConnectionInfo.authn_id, id)));
}
MyClientConnectionInfo.authn_id = MemoryContextStrdup(TopMemoryContext, id);
MyClientConnectionInfo.auth_method = port->hba->auth_method;
if (Log_connections)
{
ereport(LOG,
errmsg("connection authenticated: identity=\"%s\" method=%s "
"(%s:%d)",
MyClientConnectionInfo.authn_id,
hba_authname(MyClientConnectionInfo.auth_method),
port->hba->sourcefile, port->hba->linenumber));
}
}
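
/*
 * Illustrative sketch (not part of the original file): a password-style
 * method would record the identity right after its credential check
 * succeeds and before any authorization decisions are made, e.g.:
 *
 *		if (plain_crypt_verify(port->user_name, shadow_pass, passwd,
 *							   logdetail) == STATUS_OK)
 *			set_authn_id(port, port->user_name);
 *
 * plain_crypt_verify() stands in here for whatever method-specific
 * verification routine applies; shadow_pass, passwd and logdetail are
 * assumed to be locals of the calling auth function.
 */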
/*
* Client authentication starts here. If there is an error, this
* function does not return and the backend process is terminated.
*/
void
ClientAuthentication(Port *port)
{
int status = STATUS_ERROR;
const char *logdetail = NULL;
/*
* Get the authentication method to use for this frontend/database
* combination. Note: we do not parse the file at this point; this has
* already been done elsewhere. hba.c dropped an error message into the
* server logfile if parsing the hba config file failed.
*/
hba_getauthmethod(port);
CHECK_FOR_INTERRUPTS();
/*
* This is the first point where we have access to the hba record for the
* current connection, so perform any verifications based on the hba
* options field that should be done *before* the authentication here.
*/
if (port->hba->clientcert != clientCertOff)
{
/* If we haven't loaded a root certificate store, fail */
if (!secure_loaded_verify_locations())
ereport(FATAL,
(errcode(ERRCODE_CONFIG_FILE_ERROR),
errmsg("client certificates can only be checked if a root certificate store is available")));
/*
* If we loaded a root certificate store, and if a certificate is
* present on the client, then it has been verified against our root
* certificate store, and the connection would have been aborted
* already if it didn't verify ok.
*/
if (!port->peer_cert_valid)
ereport(FATAL,
(errcode(ERRCODE_INVALID_AUTHORIZATION_SPECIFICATION),
errmsg("connection requires a valid client certificate")));
}
/*
* Now proceed to do the actual authentication check
*/
switch (port->hba->auth_method)
{
case uaReject:
/*
* An explicit "reject" entry in pg_hba.conf. This report exposes
* the fact that there's an explicit reject entry, which is
* perhaps not so desirable from a security standpoint; but the
* message for an implicit reject could confuse the DBA a lot when
* the true situation is a match to an explicit reject. And we
* don't want to change the message for an implicit reject. As
* noted below, the additional information shown here doesn't
* expose anything not known to an attacker.
*/
{
char hostinfo[NI_MAXHOST];
const char *encryption_state;
pg_getnameinfo_all(&port->raddr.addr, port->raddr.salen,
hostinfo, sizeof(hostinfo),
NULL, 0,
NI_NUMERICHOST);
encryption_state =
#ifdef ENABLE_GSS
(port->gss && port->gss->enc) ? _("GSS encryption") :
#endif
#ifdef USE_SSL
port->ssl_in_use ? _("SSL encryption") :
#endif
_("no encryption");
if (am_walsender && !am_db_walsender)
ereport(FATAL,
(errcode(ERRCODE_INVALID_AUTHORIZATION_SPECIFICATION),
/* translator: last %s describes encryption state */
errmsg("pg_hba.conf rejects replication connection for host \"%s\", user \"%s\", %s",
hostinfo, port->user_name,
encryption_state)));
else
ereport(FATAL,
(errcode(ERRCODE_INVALID_AUTHORIZATION_SPECIFICATION),
/* translator: last %s describes encryption state */
errmsg("pg_hba.conf rejects connection for host \"%s\", user \"%s\", database \"%s\", %s",
hostinfo, port->user_name,
port->database_name,
encryption_state)));
break;
}
case uaImplicitReject:
/*
* No matching entry, so tell the user we fell through.
*
* NOTE: the extra info reported here is not a security breach,
* because all that info is known at the frontend and must be
* assumed known to bad guys. We're merely helping out the less
* clueful good guys.
*/
{
char hostinfo[NI_MAXHOST];
const char *encryption_state;
pg_getnameinfo_all(&port->raddr.addr, port->raddr.salen,
hostinfo, sizeof(hostinfo),
NULL, 0,
NI_NUMERICHOST);
encryption_state =
#ifdef ENABLE_GSS
(port->gss && port->gss->enc) ? _("GSS encryption") :
#endif
#ifdef USE_SSL
port->ssl_in_use ? _("SSL encryption") :
#endif
_("no encryption");
#define HOSTNAME_LOOKUP_DETAIL(port) \
(port->remote_hostname ? \
(port->remote_hostname_resolv == +1 ? \
errdetail_log("Client IP address resolved to \"%s\", forward lookup matches.", \
port->remote_hostname) : \
port->remote_hostname_resolv == 0 ? \
errdetail_log("Client IP address resolved to \"%s\", forward lookup not checked.", \
port->remote_hostname) : \
port->remote_hostname_resolv == -1 ? \
errdetail_log("Client IP address resolved to \"%s\", forward lookup does not match.", \
port->remote_hostname) : \
port->remote_hostname_resolv == -2 ? \
errdetail_log("Could not translate client host name \"%s\" to IP address: %s.", \
port->remote_hostname, \
gai_strerror(port->remote_hostname_errcode)) : \
0) \
: (port->remote_hostname_resolv == -2 ? \
errdetail_log("Could not resolve client IP address to a host name: %s.", \
gai_strerror(port->remote_hostname_errcode)) : \
0))
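			/*
			 * HOSTNAME_LOOKUP_DETAIL() supplies the errdetail_log() (or
			 * nothing) for the reports below, depending on whether the
			 * client address resolved to a host name and whether the
			 * forward lookup confirmed it.
			 */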
if (am_walsender && !am_db_walsender)
ereport(FATAL,
(errcode(ERRCODE_INVALID_AUTHORIZATION_SPECIFICATION),
/* translator: last %s describes encryption state */
errmsg("no pg_hba.conf entry for replication connection from host \"%s\", user \"%s\", %s",
hostinfo, port->user_name,
encryption_state),
HOSTNAME_LOOKUP_DETAIL(port)));
else
ereport(FATAL,
(errcode(ERRCODE_INVALID_AUTHORIZATION_SPECIFICATION),
/* translator: last %s describes encryption state */
errmsg("no pg_hba.conf entry for host \"%s\", user \"%s\", database \"%s\", %s",
hostinfo, port->user_name,
port->database_name,
encryption_state),
HOSTNAME_LOOKUP_DETAIL(port)));
break;
}
case uaGSS:
#ifdef ENABLE_GSS
/* We might or might not have the gss workspace already */
if (port->gss == NULL)
port->gss = (pg_gssinfo *)
MemoryContextAllocZero(TopMemoryContext,
sizeof(pg_gssinfo));
port->gss->auth = true;
/*
* If GSS state was set up while enabling encryption, we can just
* check the client's principal. Otherwise, ask for it.
*/
if (port->gss->enc)
status = pg_GSS_checkauth(port);
else
{
sendAuthRequest(port, AUTH_REQ_GSS, NULL, 0);
status = pg_GSS_recvauth(port);
}
#else
Assert(false);
#endif
break;
case uaSSPI:
#ifdef ENABLE_SSPI
if (port->gss == NULL)
port->gss = (pg_gssinfo *)
MemoryContextAllocZero(TopMemoryContext,
sizeof(pg_gssinfo));
sendAuthRequest(port, AUTH_REQ_SSPI, NULL, 0);
status = pg_SSPI_recvauth(port);
#else
Assert(false);
#endif
break;
case uaPeer:
status = auth_peer(port);
break;
case uaIdent:
status = ident_inet(port);
break;
case uaMD5:
case uaSCRAM:
status = CheckPWChallengeAuth(port, &logdetail);
break;
case uaPassword:
status = CheckPasswordAuth(port, &logdetail);
break;
case uaPAM:
#ifdef USE_PAM
status = CheckPAMAuth(port, port->user_name, "");
#else
Assert(false);
#endif /* USE_PAM */
break;
case uaBSD:
#ifdef USE_BSD_AUTH
status = CheckBSDAuth(port, port->user_name);
#else
Assert(false);
#endif /* USE_BSD_AUTH */
break;
case uaLDAP:
#ifdef USE_LDAP
status = CheckLDAPAuth(port);
#else
Assert(false);
#endif
break;
case uaRADIUS:
status = CheckRADIUSAuth(port);
break;
case uaCert:
/* uaCert will be treated as if clientcert=verify-full (uaTrust) */
case uaTrust:
status = STATUS_OK;
break;
}
if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
|| port->hba->auth_method == uaCert)
{
/*
* Make sure we only check the certificate if we use the cert method
* or verify-full option.
*/
#ifdef USE_SSL
status = CheckCertAuth(port);
#else
Assert(false);
#endif
}
if (ClientAuthentication_hook)
(*ClientAuthentication_hook) (port, status);
if (status == STATUS_OK)
sendAuthRequest(port, AUTH_REQ_OK, NULL, 0);
else
auth_failed(port, status, logdetail);
}
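
/*
 * Illustrative sketch (not part of the original file): an extension can
 * observe the result of the checks above through ClientAuthentication_hook.
 * A minimal hook, assuming the usual shared-library _PG_init() entry point;
 * the names prev_client_auth_hook and my_client_auth are invented for the
 * example:
 *
 *		static ClientAuthentication_hook_type prev_client_auth_hook = NULL;
 *
 *		static void
 *		my_client_auth(Port *port, int status)
 *		{
 *			if (prev_client_auth_hook)
 *				prev_client_auth_hook(port, status);
 *			if (status != STATUS_OK)
 *				elog(LOG, "authentication failed for user \"%s\"",
 *					 port->user_name);
 *		}
 *
 *		void
 *		_PG_init(void)
 *		{
 *			prev_client_auth_hook = ClientAuthentication_hook;
 *			ClientAuthentication_hook = my_client_auth;
 *		}
 */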
/*
* Send an authentication request packet to the frontend.
*/
void
sendAuthRequest(Port *port, AuthRequest areq, const char *extradata, int extralen)
{
StringInfoData buf;
CHECK_FOR_INTERRUPTS();
pq_beginmessage(&buf, 'R');
pq_sendint32(&buf, (int32) areq);
if (extralen > 0)
pq_sendbytes(&buf, extradata, extralen);
pq_endmessage(&buf);
/*
* Flush message so client will see it, except for AUTH_REQ_OK and
* AUTH_REQ_SASL_FIN, which need not be sent until we are ready for
* queries.
*/
if (areq != AUTH_REQ_OK && areq != AUTH_REQ_SASL_FIN)
pq_flush();
CHECK_FOR_INTERRUPTS();
}
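
/*
 * Illustrative sketch (not part of the original file): callers pass
 * method-specific "extradata" where the protocol requires it, for example
 * a 4-byte salt for MD5 or nothing at all for a plaintext challenge:
 *
 *		sendAuthRequest(port, AUTH_REQ_MD5, md5Salt, 4);
 *		sendAuthRequest(port, AUTH_REQ_PASSWORD, NULL, 0);
 *
 * md5Salt is assumed to be a 4-byte random salt generated by the caller.
 */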
/*
* Collect password response packet from frontend.
*
* Returns NULL if couldn't get password, else palloc'd string.
*/
static char *
recv_password_packet(Port *port)
{
StringInfoData buf;
int mtype;
pq_startmsgread();
/* Expect 'p' message type */
mtype = pq_getbyte();
if (mtype != 'p')
{
/*
* If the client just disconnects without offering a password, don't
* make a log entry. This is legal per protocol spec and in fact
* commonly done by psql, so complaining just clutters the log.
*/
if (mtype != EOF)
ereport(ERROR,
(errcode(ERRCODE_PROTOCOL_VIOLATION),
errmsg("expected password response, got message type %d",
mtype)));
return NULL; /* EOF or bad message type */
}
initStringInfo(&buf);
if (pq_getmessage(&buf, PG_MAX_AUTH_TOKEN_LENGTH)) /* receive password */
{
/* EOF - pq_getmessage already logged a suitable message */
pfree(buf.data);
return NULL;
}
/*
* Apply sanity check: password packet length should agree with length of
* contained string. Note it is safe to use strlen here because
* StringInfo is guaranteed to have an appended '\0'.
*/
if (strlen(buf.data) + 1 != buf.len)
ereport(ERROR,
(errcode(ERRCODE_PROTOCOL_VIOLATION),
errmsg("invalid password packet size")));
/*
* Don't allow an empty password. Libpq treats an empty password the same
* as no password at all, and won't even try to authenticate. But other
* clients might, so allowing it would be confusing.
*
* Note that this only catches an empty password sent by the client in
* plaintext. There's also a check in CREATE/ALTER USER that prevents an
* empty string from being stored as a user's password in the first place.
* We rely on that for MD5 and SCRAM authentication, but we still need
* this check here, to prevent an empty password from being used with
* authentication methods that check the password against an external
* system, like PAM, LDAP and RADIUS.
*/
if (buf.len == 1)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PASSWORD),
errmsg("empty password returned by client")));
/* Do not echo password to logs, for security. */
elog(DEBUG5, "received password packet");
/*
* Return the received string. Note we do not attempt to do any
* character-set conversion on it; since we don't yet know the client's
* encoding, there wouldn't be much point.
*/
return buf.data;
}
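
/*
 * Illustrative sketch (not part of the original file): callers own the
 * returned palloc'd string and should free it once the credential has been
 * checked, for example:
 *
 *		passwd = recv_password_packet(port);
 *		if (passwd == NULL)
 *			return STATUS_EOF;	/* client closed the connection */
 *		... verify passwd against the stored secret ...
 *		pfree(passwd);
 */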
/*----------------------------------------------------------------
* Password-based authentication mechanisms
*----------------------------------------------------------------
*/
/*
* Plaintext password authentication.
*/
static int
CheckPasswordAuth(Port *port, const char **logdetail)
{
char *passwd;
int result;
char *shadow_pass;
sendAuthRequest(port, AUTH_REQ_PASSWORD, NULL, 0);
passwd = recv_password_packet(port);
if (passwd == NULL)
return STATUS_EOF; /* client wouldn't send password */
shadow_pass = get_role_password(port->user_name, logdetail);
if (shadow_pass)
{
result = plain_crypt_verify(port->user_name, shadow_pass, passwd,
logdetail);
}
else
result = STATUS_ERROR;
if (shadow_pass)
pfree(shadow_pass);
pfree(passwd);
if (result == STATUS_OK)
set_authn_id(port, port->user_name);
return result;
}
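
/*
 * Client-side counterpart of the exchange above, as a minimal illustrative
 * sketch: with a pg_hba.conf line that uses the 'password' method, the
 * server sends AUTH_REQ_PASSWORD and libpq answers with the cleartext
 * password taken from the connection parameters.  The host, database, user
 * and password below are made-up examples, and the sketch is guarded so it
 * is never compiled into the backend.
 */
#ifdef AUTH_EXAMPLE_CLEARTEXT_CLIENT
#include <stdio.h>
#include <libpq-fe.h>

static int
example_cleartext_login(void)
{
	PGconn	   *conn;

	/* libpq sends the password only after the server requests it */
	conn = PQconnectdb("host=db.example.com dbname=postgres "
					   "user=alice password=secret");
	if (PQstatus(conn) != CONNECTION_OK)
	{
		fprintf(stderr, "login failed: %s", PQerrorMessage(conn));
		PQfinish(conn);
		return 1;
	}
	PQfinish(conn);
	return 0;
}
#endif							/* AUTH_EXAMPLE_CLEARTEXT_CLIENT */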
/*
* MD5 and SCRAM authentication.
*/
static int
CheckPWChallengeAuth(Port *port, const char **logdetail)
{
int auth_result;
char *shadow_pass;
PasswordType pwtype;
Assert(port->hba->auth_method == uaSCRAM ||
port->hba->auth_method == uaMD5);
/* First look up the user's password. */
shadow_pass = get_role_password(port->user_name, logdetail);
/*
* If the user does not exist, or has no password or it's expired, we
* still go through the motions of authentication, to avoid revealing to
* the client that the user didn't exist. If 'md5' is allowed, we choose
* whether to use 'md5' or 'scram-sha-256' authentication based on current
* password_encryption setting. The idea is that most genuine users
* probably have a password of that type, and if we pretend that this user
* had a password of that type, too, it "blends in" best.
*/
if (!shadow_pass)
pwtype = Password_encryption;
else
pwtype = get_password_type(shadow_pass);
/*
* If 'md5' authentication is allowed, decide whether to perform 'md5' or
* 'scram-sha-256' authentication based on the type of password the user
* has. If it's an MD5 hash, we must do MD5 authentication, and if it's a
* SCRAM secret, we must do SCRAM authentication.
*
* If MD5 authentication is not allowed, always use SCRAM. If the user
* had an MD5 password, CheckSASLAuth() with the SCRAM mechanism will
* fail.
*/
if (port->hba->auth_method == uaMD5 && pwtype == PASSWORD_TYPE_MD5)
auth_result = CheckMD5Auth(port, shadow_pass, logdetail);
else
auth_result = CheckSASLAuth(&pg_be_scram_mech, port, shadow_pass,
logdetail);
if (shadow_pass)
pfree(shadow_pass);
/*
* If get_role_password() returned error, return error, even if the
* authentication succeeded.
*/
if (!shadow_pass)
{
Assert(auth_result != STATUS_OK);
return STATUS_ERROR;
}
if (auth_result == STATUS_OK)
set_authn_id(port, port->user_name);
return auth_result;
}
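
/*
 * Compact summary of the dispatch above, for illustration only; the
 * authoritative logic is CheckPWChallengeAuth() itself.  "Stored secret"
 * means pg_authid.rolpassword.  When it is missing or expired, its type is
 * substituted with the current password_encryption setting before the
 * choice is made, and the final result is forced to STATUS_ERROR.
 *
 *   hba method     | stored secret  | challenge actually sent
 *   ---------------+----------------+----------------------------------
 *   md5            | MD5 hash       | MD5 (CheckMD5Auth)
 *   md5            | SCRAM secret   | SCRAM-SHA-256 (CheckSASLAuth)
 *   md5            | none / expired | follows password_encryption
 *   scram-sha-256  | anything       | SCRAM-SHA-256 (CheckSASLAuth)
 */
#ifdef AUTH_EXAMPLE_DISPATCH
static bool
example_uses_md5_challenge(UserAuth auth_method, PasswordType pwtype)
{
	/* Only the 'md5' hba method combined with an MD5 hash stays on MD5 */
	return auth_method == uaMD5 && pwtype == PASSWORD_TYPE_MD5;
}
#endif							/* AUTH_EXAMPLE_DISPATCH */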
static int
CheckMD5Auth(Port *port, char *shadow_pass, const char **logdetail)
{
char md5Salt[4]; /* Password salt */
char *passwd;
int result;
if (Db_user_namespace)
ereport(FATAL,
(errcode(ERRCODE_INVALID_AUTHORIZATION_SPECIFICATION),
errmsg("MD5 authentication is not supported when \"db_user_namespace\" is enabled")));
/* include the salt to use for computing the response */
if (!pg_strong_random(md5Salt, 4))
{
ereport(LOG,
(errmsg("could not generate random MD5 salt")));
return STATUS_ERROR;
}
sendAuthRequest(port, AUTH_REQ_MD5, md5Salt, 4);
passwd = recv_password_packet(port);
if (passwd == NULL)
return STATUS_EOF; /* client wouldn't send password */
if (shadow_pass)
result = md5_crypt_verify(port->user_name, shadow_pass, passwd,
md5Salt, 4, logdetail);
else
result = STATUS_ERROR;
pfree(passwd);
return result;
}
/*----------------------------------------------------------------
* GSSAPI authentication system
*----------------------------------------------------------------
*/
#ifdef ENABLE_GSS
static int
pg_GSS_recvauth(Port *port)
{
OM_uint32 maj_stat,
min_stat,
lmin_s,
gflags;
int mtype;
StringInfoData buf;
gss_buffer_desc gbuf;
gss_cred_id_t delegated_creds;
/*
* Use the configured keytab, if there is one. As we now require MIT
* Kerberos, we might consider using the credential store extensions in
* the future instead of the environment variable.
*/
if (pg_krb_server_keyfile != NULL && pg_krb_server_keyfile[0] != '\0')
{
if (setenv("KRB5_KTNAME", pg_krb_server_keyfile, 1) != 0)
{
/* The only likely failure cause is OOM, so use that errcode */
ereport(FATAL,
(errcode(ERRCODE_OUT_OF_MEMORY),
errmsg("could not set environment: %m")));
}
}
/*
* We accept any service principal that's present in our keytab. This
* increases interoperability between Kerberos implementations that treat,
* for example, case sensitivity differently, while not really opening up
* any new vector of attack.
*/
port->gss->cred = GSS_C_NO_CREDENTIAL;
/*
* Initialize sequence with an empty context
*/
port->gss->ctx = GSS_C_NO_CONTEXT;
delegated_creds = GSS_C_NO_CREDENTIAL;
port->gss->delegated_creds = false;
/*
* Loop through GSSAPI message exchange. This exchange can consist of
* multiple messages sent in both directions. First message is always from
* the client. All messages from client to server are password packets
* (type 'p').
*/
do
{
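/*
 * Mark the start of reading a protocol message.  If an error is raised
 * while this flag is set, the connection is terminated instead of
 * risking loss of protocol sync.
 */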
pq_startmsgread();
CHECK_FOR_INTERRUPTS();
mtype = pq_getbyte();
if (mtype != 'p')
{
/* Only log error if client didn't disconnect. */
if (mtype != EOF)
ereport(ERROR,
(errcode(ERRCODE_PROTOCOL_VIOLATION),
errmsg("expected GSS response, got message type %d",
mtype)));
return STATUS_ERROR;
}
/* Get the actual GSS token */
initStringInfo(&buf);
if (pq_getmessage(&buf, PG_MAX_AUTH_TOKEN_LENGTH))
{
/* EOF - pq_getmessage already logged error */
pfree(buf.data);
return STATUS_ERROR;
}
/* Map to GSSAPI style buffer */
gbuf.length = buf.len;
gbuf.value = buf.data;
elog(DEBUG4, "processing received GSS token of length %u",
(unsigned int) gbuf.length);
maj_stat = gss_accept_sec_context(&min_stat,
&port->gss->ctx,
port->gss->cred,
&gbuf,
GSS_C_NO_CHANNEL_BINDINGS,
&port->gss->name,
NULL,
&port->gss->outbuf,
&gflags,
NULL,
pg_gss_accept_deleg ? &delegated_creds : NULL);
/* gbuf no longer used */
pfree(buf.data);
elog(DEBUG5, "gss_accept_sec_context major: %u, "
"minor: %u, outlen: %u, outflags: %x",
maj_stat, min_stat,
(unsigned int) port->gss->outbuf.length, gflags);
CHECK_FOR_INTERRUPTS();
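/*
 * If the client delegated credentials and the negotiation flags confirm
 * it, stash them away so the backend can make use of them later.
 */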
if (delegated_creds != GSS_C_NO_CREDENTIAL && gflags & GSS_C_DELEG_FLAG)
{
pg_store_delegated_credential(delegated_creds);
port->gss->delegated_creds = true;
}
if (port->gss->outbuf.length != 0)
{
/*
* Negotiation generated data to be sent to the client.
*/
elog(DEBUG4, "sending GSS response token of length %u",
(unsigned int) port->gss->outbuf.length);
sendAuthRequest(port, AUTH_REQ_GSS_CONT,
port->gss->outbuf.value, port->gss->outbuf.length);
gss_release_buffer(&lmin_s, &port->gss->outbuf);
}
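/*
 * Anything other than GSS_S_COMPLETE or GSS_S_CONTINUE_NEEDED is a hard
 * failure: discard the partially-established context and report it.
 */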
if (maj_stat != GSS_S_COMPLETE && maj_stat != GSS_S_CONTINUE_NEEDED)
{
gss_delete_sec_context(&lmin_s, &port->gss->ctx, GSS_C_NO_BUFFER);
pg_GSS_error(_("accepting GSS security context failed"),
maj_stat, min_stat);
return STATUS_ERROR;
}
if (maj_stat == GSS_S_CONTINUE_NEEDED)
elog(DEBUG4, "GSS continue needed");
} while (maj_stat == GSS_S_CONTINUE_NEEDED);
if (port->gss->cred != GSS_C_NO_CREDENTIAL)
{
/*
* Release service principal credentials
*/
gss_release_cred(&min_stat, &port->gss->cred);
}
return pg_GSS_checkauth(port);
}
/*
* Check whether the GSSAPI-authenticated user is allowed to connect as the
* claimed username.
*/
static int
pg_GSS_checkauth(Port *port)
{
int ret;
OM_uint32 maj_stat,
min_stat,
lmin_s;
gss_buffer_desc gbuf;
char *princ;
/*
* Get the name of the user that authenticated, and compare it to the pg
* username that was specified for the connection.
*/
maj_stat = gss_display_name(&min_stat, port->gss->name, &gbuf, NULL);
if (maj_stat != GSS_S_COMPLETE)
{
pg_GSS_error(_("retrieving GSS user name failed"),
maj_stat, min_stat);
return STATUS_ERROR;
}
/*
* gbuf.value might not be null-terminated, so turn it into a regular
* null-terminated string.
*/
princ = palloc(gbuf.length + 1);
memcpy(princ, gbuf.value, gbuf.length);
princ[gbuf.length] = '\0';
gss_release_buffer(&lmin_s, &gbuf);
/*
* Copy the original name of the authenticated principal into our backend
* memory for display later.
*
* This is also our authenticated identity. Set it now, rather than
* waiting for the usermap check below, because authentication has already
* succeeded and we want the log file to reflect that.
*/
port->gss->princ = MemoryContextStrdup(TopMemoryContext, princ);
set_authn_id(port, princ);
/*
* Split the username at the realm separator
*/
if (strchr(princ, '@'))
{
char *cp = strchr(princ, '@');
/*
* If we are not going to include the realm in the username that is
* passed to the ident map, destructively modify it here to remove the
* realm. Then advance past the separator to check the realm.
*/
if (!port->hba->include_realm)
*cp = '\0';
cp++;
if (port->hba->krb_realm != NULL && strlen(port->hba->krb_realm))
{
/*
* Match the realm part of the name first
*/
if (pg_krb_caseins_users)
ret = pg_strcasecmp(port->hba->krb_realm, cp);
else
ret = strcmp(port->hba->krb_realm, cp);
if (ret)
{
/* GSS realm does not match */
elog(DEBUG2,
"GSSAPI realm (%s) and configured realm (%s) don't match",
cp, port->hba->krb_realm);
pfree(princ);
return STATUS_ERROR;
}
}
}
else if (port->hba->krb_realm && strlen(port->hba->krb_realm))
{
elog(DEBUG2,
"GSSAPI did not return realm but realm matching was requested");
pfree(princ);
return STATUS_ERROR;
}
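/*
 * Finally, check whether the (possibly realm-stripped) principal is
 * allowed to connect as the requested PostgreSQL user, per the usermap.
 */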
ret = check_usermap(port->hba->usermap, port->user_name, princ,
pg_krb_caseins_users);
pfree(princ);
return ret;
}
#endif /* ENABLE_GSS */
/*----------------------------------------------------------------
* SSPI authentication system
*----------------------------------------------------------------
*/
#ifdef ENABLE_SSPI
/*
* Generate an error for SSPI authentication. The caller should apply
* _() to errmsg to make it translatable.
*/
static void
pg_SSPI_error(int severity, const char *errmsg, SECURITY_STATUS r)
{
char sysmsg[256];
if (FormatMessage(FORMAT_MESSAGE_IGNORE_INSERTS |
FORMAT_MESSAGE_FROM_SYSTEM,
NULL, r, 0,
sysmsg, sizeof(sysmsg), NULL) == 0)
ereport(severity,
(errmsg_internal("%s", errmsg),
errdetail_internal("SSPI error %x", (unsigned int) r)));
else
ereport(severity,
(errmsg_internal("%s", errmsg),
errdetail_internal("%s (%x)", sysmsg, (unsigned int) r)));
}
static int
pg_SSPI_recvauth(Port *port)
{
int mtype;
StringInfoData buf;
SECURITY_STATUS r;
CredHandle sspicred;
CtxtHandle *sspictx = NULL,
newctx;
TimeStamp expiry;
ULONG contextattr;
SecBufferDesc inbuf;
SecBufferDesc outbuf;
SecBuffer OutBuffers[1];
SecBuffer InBuffers[1];
HANDLE token;
TOKEN_USER *tokenuser;
DWORD retlen;
char accountname[MAXPGPATH];
char domainname[MAXPGPATH];
DWORD accountnamesize = sizeof(accountname);
DWORD domainnamesize = sizeof(domainname);
SID_NAME_USE accountnameuse;
char *authn_id;
/*
* Acquire a handle to the server credentials.
*/
r = AcquireCredentialsHandle(NULL,
"negotiate",
SECPKG_CRED_INBOUND,
NULL,
NULL,
NULL,
NULL,
&sspicred,
&expiry);
if (r != SEC_E_OK)
pg_SSPI_error(ERROR, _("could not acquire SSPI credentials"), r);
/*
* Loop through SSPI message exchange. This exchange can consist of
* multiple messages sent in both directions. First message is always from
* the client. All messages from client to server are password packets
* (type 'p').
*/
do
{
pq_startmsgread();
mtype = pq_getbyte();
if (mtype != 'p')
{
if (sspictx != NULL)
{
DeleteSecurityContext(sspictx);
free(sspictx);
}
FreeCredentialsHandle(&sspicred);
/* Only log error if client didn't disconnect. */
if (mtype != EOF)
ereport(ERROR,
(errcode(ERRCODE_PROTOCOL_VIOLATION),
errmsg("expected SSPI response, got message type %d",
mtype)));
return STATUS_ERROR;
}
/* Get the actual SSPI token */
initStringInfo(&buf);
if (pq_getmessage(&buf, PG_MAX_AUTH_TOKEN_LENGTH))
{
/* EOF - pq_getmessage already logged error */
pfree(buf.data);
if (sspictx != NULL)
{
DeleteSecurityContext(sspictx);
free(sspictx);
}
FreeCredentialsHandle(&sspicred);
return STATUS_ERROR;
}
/* Map to SSPI style buffer */
inbuf.ulVersion = SECBUFFER_VERSION;
inbuf.cBuffers = 1;
inbuf.pBuffers = InBuffers;
InBuffers[0].pvBuffer = buf.data;
InBuffers[0].cbBuffer = buf.len;
InBuffers[0].BufferType = SECBUFFER_TOKEN;
/* Prepare output buffer */
OutBuffers[0].pvBuffer = NULL;
OutBuffers[0].BufferType = SECBUFFER_TOKEN;
OutBuffers[0].cbBuffer = 0;
outbuf.cBuffers = 1;
outbuf.pBuffers = OutBuffers;
outbuf.ulVersion = SECBUFFER_VERSION;
elog(DEBUG4, "processing received SSPI token of length %u",
(unsigned int) buf.len);
r = AcceptSecurityContext(&sspicred,
sspictx,
&inbuf,
ASC_REQ_ALLOCATE_MEMORY,
SECURITY_NETWORK_DREP,
&newctx,
&outbuf,
&contextattr,
NULL);
/* input buffer no longer used */
pfree(buf.data);
if (outbuf.cBuffers > 0 && outbuf.pBuffers[0].cbBuffer > 0)
{
/*
* Negotiation generated data to be sent to the client.
*/
elog(DEBUG4, "sending SSPI response token of length %u",
(unsigned int) outbuf.pBuffers[0].cbBuffer);
port->gss->outbuf.length = outbuf.pBuffers[0].cbBuffer;
port->gss->outbuf.value = outbuf.pBuffers[0].pvBuffer;
sendAuthRequest(port, AUTH_REQ_GSS_CONT,
port->gss->outbuf.value, port->gss->outbuf.length);
FreeContextBuffer(outbuf.pBuffers[0].pvBuffer);
}
if (r != SEC_E_OK && r != SEC_I_CONTINUE_NEEDED)
{
if (sspictx != NULL)
{
DeleteSecurityContext(sspictx);
free(sspictx);
}
FreeCredentialsHandle(&sspicred);
pg_SSPI_error(ERROR,
_("could not accept SSPI security context"), r);
}
/*
* Overwrite the current context with the one we just received.  If
* sspictx is NULL, this is the first iteration and we need to allocate
* a buffer for it.  On subsequent iterations, we can just overwrite the
* buffer contents, since the size does not change.
*/
if (sspictx == NULL)
{
sspictx = malloc(sizeof(CtxtHandle));
if (sspictx == NULL)
ereport(ERROR,
(errmsg("out of memory")));
}
memcpy(sspictx, &newctx, sizeof(CtxtHandle));
if (r == SEC_I_CONTINUE_NEEDED)
elog(DEBUG4, "SSPI continue needed");
} while (r == SEC_I_CONTINUE_NEEDED);
/*
* Release service principal credentials
*/
FreeCredentialsHandle(&sspicred);
/*
* SEC_E_OK indicates that authentication is now complete.
*
* Get the name of the user that authenticated, and compare it to the pg
* username that was specified for the connection.
*/
r = QuerySecurityContextToken(sspictx, &token);
if (r != SEC_E_OK)
pg_SSPI_error(ERROR,
_("could not get token from SSPI security context"), r);
/*
* No longer need the security context, everything from here on uses the
* token instead.
*/
DeleteSecurityContext(sspictx);
free(sspictx);
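/*
 * Probe for the required TOKEN_USER buffer size.  The first call is
 * expected to fail with ERROR_INSUFFICIENT_BUFFER (error code 122); any
 * other failure is an error.
 */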
if (!GetTokenInformation(token, TokenUser, NULL, 0, &retlen) && GetLastError() != 122)
ereport(ERROR,
(errmsg_internal("could not get token information buffer size: error code %lu",
GetLastError())));
tokenuser = malloc(retlen);
if (tokenuser == NULL)
ereport(ERROR,
(errmsg("out of memory")));
if (!GetTokenInformation(token, TokenUser, tokenuser, retlen, &retlen))
ereport(ERROR,
(errmsg_internal("could not get token information: error code %lu",
GetLastError())));
CloseHandle(token);
if (!LookupAccountSid(NULL, tokenuser->User.Sid, accountname, &accountnamesize,
domainname, &domainnamesize, &accountnameuse))
ereport(ERROR,
(errmsg_internal("could not look up account SID: error code %lu",
GetLastError())));
free(tokenuser);
if (!port->hba->compat_realm)
{
int status = pg_SSPI_make_upn(accountname, sizeof(accountname),
domainname, sizeof(domainname),
port->hba->upn_username);
if (status != STATUS_OK)
/* Error already reported from pg_SSPI_make_upn */
return status;
}
/*
* We have all of the information necessary to construct the authenticated
* identity. Set it now, rather than waiting for check_usermap below,
* because authentication has already succeeded and we want the log file
* to reflect that.
*/
if (port->hba->compat_realm)
{
/* SAM-compatible format. */
authn_id = psprintf("%s\\%s", domainname, accountname);
}
else
{
/* Kerberos principal format. */
authn_id = psprintf("%s@%s", accountname, domainname);
}
set_authn_id(port, authn_id);
pfree(authn_id);
/*
* Compare realm/domain if requested. In SSPI, always compare case
* insensitive.
*/
if (port->hba->krb_realm && strlen(port->hba->krb_realm))
{
if (pg_strcasecmp(port->hba->krb_realm, domainname) != 0)
{
elog(DEBUG2,
"SSPI domain (%s) and configured domain (%s) don't match",
domainname, port->hba->krb_realm);
return STATUS_ERROR;
}
}
/*
* We have the username (without domain/realm) in accountname, compare to
* the supplied value. In SSPI, always compare case insensitive.
*
* If set to include realm, append it in <username>@<realm> format.
*/
if (port->hba->include_realm)
{
char *namebuf;
int retval;
namebuf = psprintf("%s@%s", accountname, domainname);
retval = check_usermap(port->hba->usermap, port->user_name, namebuf, true);
pfree(namebuf);
return retval;
}
else
return check_usermap(port->hba->usermap, port->user_name, accountname, true);
}
/*
* Replaces the domainname with the Kerberos realm name,
* and optionally the accountname with the Kerberos user name.
*/
static int
pg_SSPI_make_upn(char *accountname,
size_t accountnamesize,
char *domainname,
size_t domainnamesize,
bool update_accountname)
{
char *samname;
char *upname = NULL;
char *p = NULL;
ULONG upnamesize = 0;
size_t upnamerealmsize;
BOOLEAN res;
/*
* Build SAM name (DOMAIN\user), then translate to UPN
* (user@kerberos.realm). The realm name is returned in lower case, but
* that is fine because in SSPI auth, string comparisons are always
* case-insensitive.
*/
samname = psprintf("%s\\%s", domainname, accountname);
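/* The first call, with a NULL output buffer, just reports the required size. */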
res = TranslateName(samname, NameSamCompatible, NameUserPrincipal,
NULL, &upnamesize);
if ((!res && GetLastError() != ERROR_INSUFFICIENT_BUFFER)
|| upnamesize == 0)
{
pfree(samname);
ereport(LOG,
(errcode(ERRCODE_INVALID_ROLE_SPECIFICATION),
errmsg("could not translate name")));
return STATUS_ERROR;
}
/* upnamesize includes the terminating NUL. */
upname = palloc(upnamesize);
res = TranslateName(samname, NameSamCompatible, NameUserPrincipal,
upname, &upnamesize);
pfree(samname);
if (res)
p = strchr(upname, '@');
if (!res || p == NULL)
{
pfree(upname);
ereport(LOG,
(errcode(ERRCODE_INVALID_ROLE_SPECIFICATION),
errmsg("could not translate name")));
return STATUS_ERROR;
}
/* Length of realm name after the '@', including the NUL. */
upnamerealmsize = upnamesize - (p - upname + 1);
/* Replace domainname with realm name. */
if (upnamerealmsize > domainnamesize)
{
pfree(upname);
ereport(LOG,
(errcode(ERRCODE_INVALID_ROLE_SPECIFICATION),
errmsg("realm name too long")));
return STATUS_ERROR;
}
/* Length is now safe. */
strcpy(domainname, p + 1);
/* Replace account name as well (in case UPN != SAM)? */
if (update_accountname)
{
if ((p - upname + 1) > accountnamesize)
{
pfree(upname);
ereport(LOG,
(errcode(ERRCODE_INVALID_ROLE_SPECIFICATION),
errmsg("translated account name too long")));
return STATUS_ERROR;
}
*p = 0;
strcpy(accountname, upname);
}
pfree(upname);
return STATUS_OK;
}
#endif /* ENABLE_SSPI */
/*----------------------------------------------------------------
* Ident authentication system
*----------------------------------------------------------------
*/
/*
* Parse the string "*ident_response" as a response from a query to an Ident
* server. If it's a normal response indicating a user name, return true
* and store the user name at *ident_user. If it's anything else,
* return false.
*/
static bool
interpret_ident_response(const char *ident_response,
char *ident_user)
{
const char *cursor = ident_response; /* Cursor into *ident_response */
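/*
 * A successful reply looks something like
 *   "6193, 23 : USERID : UNIX : stjohns\r\n"
 * and the user name at the end is what we are after.
 */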
/*
* Ident's response, in the telnet tradition, should end in crlf (\r\n).
*/
if (strlen(ident_response) < 2)
return false;
else if (ident_response[strlen(ident_response) - 2] != '\r')
return false;
else
{
while (*cursor != ':' && *cursor != '\r')
cursor++; /* skip port field */
if (*cursor != ':')
return false;
else
{
/* We're positioned to colon before response type field */
char response_type[80];
int i; /* Index into *response_type */
cursor++; /* Go over colon */
while (pg_isblank(*cursor))
cursor++; /* skip blanks */
i = 0;
while (*cursor != ':' && *cursor != '\r' && !pg_isblank(*cursor) &&
i < (int) (sizeof(response_type) - 1))
response_type[i++] = *cursor++;
response_type[i] = '\0';
while (pg_isblank(*cursor))
cursor++; /* skip blanks */
if (strcmp(response_type, "USERID") != 0)
return false;
else
{
/*
* It's a USERID response. Good. "cursor" should be pointing
* to the colon that precedes the operating system type.
*/
if (*cursor != ':')
return false;
else
{
cursor++; /* Go over colon */
/* Skip over operating system field. */
while (*cursor != ':' && *cursor != '\r')
cursor++;
if (*cursor != ':')
return false;
else
{
cursor++; /* Go over colon */
while (pg_isblank(*cursor))
cursor++; /* skip blanks */
/* Rest of line is user name. Copy it over. */
i = 0;
while (*cursor != '\r' && i < IDENT_USERNAME_MAX)
ident_user[i++] = *cursor++;
ident_user[i] = '\0';
return true;
}
}
}
}
}
}
/*
* Talk to the ident server on "remote_addr" and find out who
* owns the tcp connection to "local_addr"
* If the username is successfully retrieved, check the usermap.
*
* XXX: Using WaitLatchOrSocket() and doing a CHECK_FOR_INTERRUPTS() if the
* latch was set would improve the responsiveness to timeouts/cancellations.
*/
static int
ident_inet(hbaPort *port)
{
const SockAddr remote_addr = port->raddr;
const SockAddr local_addr = port->laddr;
char ident_user[IDENT_USERNAME_MAX + 1];
pgsocket sock_fd = PGINVALID_SOCKET; /* for talking to Ident server */
int rc; /* Return code from a locally called function */
bool ident_return;
char remote_addr_s[NI_MAXHOST];
char remote_port[NI_MAXSERV];
char local_addr_s[NI_MAXHOST];
char local_port[NI_MAXSERV];
char ident_port[NI_MAXSERV];
char ident_query[80];
char ident_response[80 + IDENT_USERNAME_MAX];
struct addrinfo *ident_serv = NULL,
*la = NULL,
hints;
/*
* Might look a little weird to first convert it to text and then back to
* sockaddr, but it's protocol independent.
*/
pg_getnameinfo_all(&remote_addr.addr, remote_addr.salen,
remote_addr_s, sizeof(remote_addr_s),
remote_port, sizeof(remote_port),
NI_NUMERICHOST | NI_NUMERICSERV);
pg_getnameinfo_all(&local_addr.addr, local_addr.salen,
local_addr_s, sizeof(local_addr_s),
local_port, sizeof(local_port),
NI_NUMERICHOST | NI_NUMERICSERV);
snprintf(ident_port, sizeof(ident_port), "%d", IDENT_PORT);
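/* Resolve the client's address; that is the host whose Ident server we query. */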
hints.ai_flags = AI_NUMERICHOST;
hints.ai_family = remote_addr.addr.ss_family;
hints.ai_socktype = SOCK_STREAM;
hints.ai_protocol = 0;
hints.ai_addrlen = 0;
hints.ai_canonname = NULL;
hints.ai_addr = NULL;
hints.ai_next = NULL;
rc = pg_getaddrinfo_all(remote_addr_s, ident_port, &hints, &ident_serv);
if (rc || !ident_serv)
{
/* we don't expect this to happen */
ident_return = false;
goto ident_inet_done;
}
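/*
 * Also resolve our own local address, so that we can bind the outgoing
 * socket to it below.
 */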
hints.ai_flags = AI_NUMERICHOST;
hints.ai_family = local_addr.addr.ss_family;
hints.ai_socktype = SOCK_STREAM;
hints.ai_protocol = 0;
hints.ai_addrlen = 0;
hints.ai_canonname = NULL;
hints.ai_addr = NULL;
hints.ai_next = NULL;
rc = pg_getaddrinfo_all(local_addr_s, NULL, &hints, &la);
if (rc || !la)
{
/* we don't expect this to happen */
ident_return = false;
goto ident_inet_done;
}
sock_fd = socket(ident_serv->ai_family, ident_serv->ai_socktype,
ident_serv->ai_protocol);
if (sock_fd == PGINVALID_SOCKET)
{
ereport(LOG,
(errcode_for_socket_access(),
errmsg("could not create socket for Ident connection: %m")));
ident_return = false;
goto ident_inet_done;
}
/*
* Bind to the address which the client originally contacted, otherwise
* the ident server won't be able to match up the right connection. This
* is necessary if the PostgreSQL server is running on an IP alias.
*/
rc = bind(sock_fd, la->ai_addr, la->ai_addrlen);
if (rc != 0)
{
ereport(LOG,
(errcode_for_socket_access(),
errmsg("could not bind to local address \"%s\": %m",
local_addr_s)));
ident_return = false;
goto ident_inet_done;
}
rc = connect(sock_fd, ident_serv->ai_addr,
ident_serv->ai_addrlen);
if (rc != 0)
{
ereport(LOG,
(errcode_for_socket_access(),
errmsg("could not connect to Ident server at address \"%s\", port %s: %m",
remote_addr_s, ident_port)));
ident_return = false;
goto ident_inet_done;
}
/* The query we send to the Ident server */
snprintf(ident_query, sizeof(ident_query), "%s,%s\r\n",
remote_port, local_port);
/* loop in case send is interrupted */
do
{
CHECK_FOR_INTERRUPTS();
rc = send(sock_fd, ident_query, strlen(ident_query), 0);
} while (rc < 0 && errno == EINTR);
if (rc < 0)
{
ereport(LOG,
(errcode_for_socket_access(),
errmsg("could not send query to Ident server at address \"%s\", port %s: %m",
remote_addr_s, ident_port)));
ident_return = false;
goto ident_inet_done;
}
do
{
CHECK_FOR_INTERRUPTS();
rc = recv(sock_fd, ident_response, sizeof(ident_response) - 1, 0);
} while (rc < 0 && errno == EINTR);
if (rc < 0)
{
ereport(LOG,
(errcode_for_socket_access(),
errmsg("could not receive response from Ident server at address \"%s\", port %s: %m",
remote_addr_s, ident_port)));
ident_return = false;
goto ident_inet_done;
}
ident_response[rc] = '\0';
ident_return = interpret_ident_response(ident_response, ident_user);
if (!ident_return)
ereport(LOG,
(errmsg("invalidly formatted response from Ident server: \"%s\"",
ident_response)));
ident_inet_done:
if (sock_fd != PGINVALID_SOCKET)
closesocket(sock_fd);
if (ident_serv)
pg_freeaddrinfo_all(remote_addr.addr.ss_family, ident_serv);
if (la)
pg_freeaddrinfo_all(local_addr.addr.ss_family, la);
if (ident_return)
{
/*
* Success! Store the identity, then check the usermap. Note that
* setting the authenticated identity is done before checking the
* usermap, because at this point authentication has succeeded.
*/
set_authn_id(port, ident_user);
return check_usermap(port->hba->usermap, port->user_name, ident_user, false);
}
return STATUS_ERROR;
}
/*----------------------------------------------------------------
* Peer authentication system
*----------------------------------------------------------------
*/
/*
* Ask kernel about the credentials of the connecting process,
* determine the symbolic name of the corresponding user, and check
* if valid per the usermap.
*
* Iff authorized, return STATUS_OK, otherwise return STATUS_ERROR.
*/
static int
auth_peer(hbaPort *port)
{
uid_t uid;
gid_t gid;
#ifndef WIN32
struct passwd *pw;
int ret;
#endif
if (getpeereid(port->sock, &uid, &gid) != 0)
{
/* Provide special error message if getpeereid is a stub */
if (errno == ENOSYS)
ereport(LOG,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("peer authentication is not supported on this platform")));
else
ereport(LOG,
(errcode_for_socket_access(),
errmsg("could not get peer credentials: %m")));
return STATUS_ERROR;
}
#ifndef WIN32
errno = 0; /* clear errno before call */
pw = getpwuid(uid);
if (!pw)
{
int save_errno = errno;
ereport(LOG,
(errmsg("could not look up local user ID %ld: %s",
(long) uid,
save_errno ? strerror(save_errno) : _("user does not exist"))));
return STATUS_ERROR;
}
/*
* Make a copy of static getpw*() result area; this is our authenticated
* identity. Set it before calling check_usermap, because authentication
* has already succeeded and we want the log file to reflect that.
*/
set_authn_id(port, pw->pw_name);
ret = check_usermap(port->hba->usermap, port->user_name,
MyClientConnectionInfo.authn_id, false);
return ret;
#else
/* should have failed with ENOSYS above */
Assert(false);
return STATUS_ERROR;
#endif
}
/*----------------------------------------------------------------
* PAM authentication system
*----------------------------------------------------------------
*/
#ifdef USE_PAM
/*
* PAM conversation function
*/
static int
pam_passwd_conv_proc(int num_msg, const struct pam_message **msg,
struct pam_response **resp, void *appdata_ptr)
{
const char *passwd;
struct pam_response *reply;
int i;
if (appdata_ptr)
passwd = (char *) appdata_ptr;
else
{
/*
* Workaround for Solaris 2.6 where the PAM library is broken and does
* not pass appdata_ptr to the conversation routine
*/
passwd = pam_passwd;
}
*resp = NULL; /* in case of error exit */
if (num_msg <= 0 || num_msg > PAM_MAX_NUM_MSG)
return PAM_CONV_ERR;
/*
* Explicitly not using palloc here - PAM will free this memory in
* pam_end()
*/
if ((reply = calloc(num_msg, sizeof(struct pam_response))) == NULL)
{
ereport(LOG,
(errcode(ERRCODE_OUT_OF_MEMORY),
errmsg("out of memory")));
return PAM_CONV_ERR;
}
for (i = 0; i < num_msg; i++)
{
switch (msg[i]->msg_style)
{
case PAM_PROMPT_ECHO_OFF:
if (strlen(passwd) == 0)
{
/*
* Password wasn't passed to PAM the first time around -
* let's go ask the client to send a password, which we
* then stuff into PAM.
*/
sendAuthRequest(pam_port_cludge, AUTH_REQ_PASSWORD, NULL, 0);
passwd = recv_password_packet(pam_port_cludge);
if (passwd == NULL)
{
/*
* Client didn't want to send password. We
* intentionally do not log anything about this,
* either here or at higher levels.
*/
pam_no_password = true;
goto fail;
}
}
if ((reply[i].resp = strdup(passwd)) == NULL)
goto fail;
reply[i].resp_retcode = PAM_SUCCESS;
break;
case PAM_ERROR_MSG:
ereport(LOG,
(errmsg("error from underlying PAM layer: %s",
msg[i]->msg)));
/* FALL THROUGH */
case PAM_TEXT_INFO:
/* we don't bother to log TEXT_INFO messages */
if ((reply[i].resp = strdup("")) == NULL)
goto fail;
reply[i].resp_retcode = PAM_SUCCESS;
break;
default:
ereport(LOG,
(errmsg("unsupported PAM conversation %d/\"%s\"",
msg[i]->msg_style,
msg[i]->msg ? msg[i]->msg : "(none)")));
goto fail;
}
}
*resp = reply;
return PAM_SUCCESS;
fail:
/* free up whatever we allocated */
for (i = 0; i < num_msg; i++)
free(reply[i].resp);
free(reply);
return PAM_CONV_ERR;
}
/*
* Check authentication against PAM.
*/
static int
CheckPAMAuth(Port *port, const char *user, const char *password)
{
int retval;
pam_handle_t *pamh = NULL;
/*
* We can't entirely rely on PAM to pass through appdata --- it appears
* not to work on at least Solaris 2.6. So use these ugly static
* variables instead.
*/
pam_passwd = password;
pam_port_cludge = port;
pam_no_password = false;
/*
* Set the application data portion of the conversation struct. This is
* later used inside the PAM conversation to pass the password to the
* authentication module.
*/
pam_passw_conv.appdata_ptr = unconstify(char *, password); /* from password above,
* not allocated */
/* Optionally, one can set the service name in pg_hba.conf */
if (port->hba->pamservice && port->hba->pamservice[0] != '\0')
retval = pam_start(port->hba->pamservice, "pgsql@",
&pam_passw_conv, &pamh);
else
retval = pam_start(PGSQL_PAM_SERVICE, "pgsql@",
&pam_passw_conv, &pamh);
if (retval != PAM_SUCCESS)
{
ereport(LOG,
(errmsg("could not create PAM authenticator: %s",
pam_strerror(pamh, retval))));
pam_passwd = NULL; /* Unset pam_passwd */
return STATUS_ERROR;
}
retval = pam_set_item(pamh, PAM_USER, user);
if (retval != PAM_SUCCESS)
{
ereport(LOG,
(errmsg("pam_set_item(PAM_USER) failed: %s",
pam_strerror(pamh, retval))));
pam_passwd = NULL; /* Unset pam_passwd */
return STATUS_ERROR;
}
if (port->hba->conntype != ctLocal)
{
char hostinfo[NI_MAXHOST];
int flags;
if (port->hba->pam_use_hostname)
flags = 0;
else
flags = NI_NUMERICHOST | NI_NUMERICSERV;
retval = pg_getnameinfo_all(&port->raddr.addr, port->raddr.salen,
hostinfo, sizeof(hostinfo), NULL, 0,
flags);
if (retval != 0)
{
ereport(WARNING,
(errmsg_internal("pg_getnameinfo_all() failed: %s",
gai_strerror(retval))));
return STATUS_ERROR;
}
retval = pam_set_item(pamh, PAM_RHOST, hostinfo);
if (retval != PAM_SUCCESS)
{
ereport(LOG,
(errmsg("pam_set_item(PAM_RHOST) failed: %s",
pam_strerror(pamh, retval))));
pam_passwd = NULL;
return STATUS_ERROR;
}
}
retval = pam_set_item(pamh, PAM_CONV, &pam_passw_conv);
if (retval != PAM_SUCCESS)
{
ereport(LOG,
(errmsg("pam_set_item(PAM_CONV) failed: %s",
pam_strerror(pamh, retval))));
pam_passwd = NULL; /* Unset pam_passwd */
return STATUS_ERROR;
}
retval = pam_authenticate(pamh, 0);
if (retval != PAM_SUCCESS)
{
/* If pam_passwd_conv_proc saw EOF, don't log anything */
if (!pam_no_password)
ereport(LOG,
(errmsg("pam_authenticate failed: %s",
pam_strerror(pamh, retval))));
pam_passwd = NULL; /* Unset pam_passwd */
return pam_no_password ? STATUS_EOF : STATUS_ERROR;
}
retval = pam_acct_mgmt(pamh, 0);
if (retval != PAM_SUCCESS)
{
/* If pam_passwd_conv_proc saw EOF, don't log anything */
if (!pam_no_password)
ereport(LOG,
(errmsg("pam_acct_mgmt failed: %s",
pam_strerror(pamh, retval))));
pam_passwd = NULL; /* Unset pam_passwd */
return pam_no_password ? STATUS_EOF : STATUS_ERROR;
}
retval = pam_end(pamh, retval);
if (retval != PAM_SUCCESS)
{
ereport(LOG,
(errmsg("could not release PAM authenticator: %s",
pam_strerror(pamh, retval))));
}
pam_passwd = NULL; /* Unset pam_passwd */
if (retval == PAM_SUCCESS)
set_authn_id(port, user);
return (retval == PAM_SUCCESS ? STATUS_OK : STATUS_ERROR);
}
#endif /* USE_PAM */
/*----------------------------------------------------------------
* BSD authentication system
*----------------------------------------------------------------
*/
#ifdef USE_BSD_AUTH
static int
CheckBSDAuth(Port *port, char *user)
{
char *passwd;
int retval;
/* Send regular password request to client, and get the response */
sendAuthRequest(port, AUTH_REQ_PASSWORD, NULL, 0);
passwd = recv_password_packet(port);
if (passwd == NULL)
return STATUS_EOF;
/*
* Ask the BSD auth system to verify password. Note that auth_userokay
* will overwrite the password string with zeroes, but it's just a
* temporary string so we don't care.
*/
retval = auth_userokay(user, NULL, "auth-postgresql", passwd);
pfree(passwd);
if (!retval)
return STATUS_ERROR;
set_authn_id(port, user);
return STATUS_OK;
}
#endif /* USE_BSD_AUTH */
/*----------------------------------------------------------------
* LDAP authentication system
*----------------------------------------------------------------
*/
#ifdef USE_LDAP
static int errdetail_for_ldap(LDAP *ldap);
/*
* Initialize a connection to the LDAP server, including setting up
* TLS if requested.
*/
static int
InitializeLDAPConnection(Port *port, LDAP **ldap)
{
const char *scheme;
int ldapversion = LDAP_VERSION3;
int r;
scheme = port->hba->ldapscheme;
if (scheme == NULL)
scheme = "ldap";
#ifdef WIN32
if (strcmp(scheme, "ldaps") == 0)
*ldap = ldap_sslinit(port->hba->ldapserver, port->hba->ldapport, 1);
else
*ldap = ldap_init(port->hba->ldapserver, port->hba->ldapport);
if (!*ldap)
{
ereport(LOG,
(errmsg("could not initialize LDAP: error code %d",
(int) LdapGetLastError())));
return STATUS_ERROR;
}
#else
#ifdef HAVE_LDAP_INITIALIZE
/*
* OpenLDAP provides a non-standard extension ldap_initialize() that takes
* a list of URIs, allowing us to request "ldaps" instead of "ldap". It
* also provides ldap_domain2hostlist() to find LDAP servers automatically
* using DNS SRV. They were introduced in the same version, so for now we
* don't have an extra configure check for the latter.
*/
{
StringInfoData uris;
char *hostlist = NULL;
char *p;
bool append_port;
/* We'll build a space-separated scheme://hostname:port list here */
initStringInfo(&uris);
/*
* If pg_hba.conf provided no hostnames, we can ask OpenLDAP to try to
* find some by extracting a domain name from the base DN and looking
* up DNS SRV records for _ldap._tcp.<domain>.
*/
if (!port->hba->ldapserver || port->hba->ldapserver[0] == '\0')
{
char *domain;
/* ou=blah,dc=foo,dc=bar -> foo.bar */
if (ldap_dn2domain(port->hba->ldapbasedn, &domain))
{
ereport(LOG,
(errmsg("could not extract domain name from ldapbasedn")));
return STATUS_ERROR;
}
/* Look up a list of LDAP server hosts and port numbers */
if (ldap_domain2hostlist(domain, &hostlist))
{
ereport(LOG,
(errmsg("LDAP authentication could not find DNS SRV records for \"%s\"",
domain),
(errhint("Set an LDAP server name explicitly."))));
ldap_memfree(domain);
return STATUS_ERROR;
}
ldap_memfree(domain);
/* We have a space-separated list of host:port entries */
p = hostlist;
append_port = false;
}
else
{
/* We have a space-separated list of hosts from pg_hba.conf */
p = port->hba->ldapserver;
append_port = true;
}
/* Convert the list of host[:port] entries to full URIs */
do
{
size_t size;
/* Find the span of the next entry */
size = strcspn(p, " ");
/* Append a space separator if this isn't the first URI */
if (uris.len > 0)
appendStringInfoChar(&uris, ' ');
/* Append scheme://host:port */
appendStringInfoString(&uris, scheme);
appendStringInfoString(&uris, "://");
appendBinaryStringInfo(&uris, p, size);
if (append_port)
appendStringInfo(&uris, ":%d", port->hba->ldapport);
/* Step over this entry and any number of trailing spaces */
p += size;
while (*p == ' ')
++p;
} while (*p);
/* Free memory from OpenLDAP if we looked up SRV records */
if (hostlist)
ldap_memfree(hostlist);
/* Finally, try to connect using the URI list */
r = ldap_initialize(ldap, uris.data);
pfree(uris.data);
if (r != LDAP_SUCCESS)
{
ereport(LOG,
(errmsg("could not initialize LDAP: %s",
ldap_err2string(r))));
return STATUS_ERROR;
}
}
#else
if (strcmp(scheme, "ldaps") == 0)
{
ereport(LOG,
(errmsg("ldaps not supported with this LDAP library")));
return STATUS_ERROR;
}
*ldap = ldap_init(port->hba->ldapserver, port->hba->ldapport);
if (!*ldap)
{
ereport(LOG,
(errmsg("could not initialize LDAP: %m")));
return STATUS_ERROR;
}
#endif
#endif
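/*
 * Explicitly request LDAP protocol version 3; the library's default may
 * still be the obsolete version 2.
 */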
if ((r = ldap_set_option(*ldap, LDAP_OPT_PROTOCOL_VERSION, &ldapversion)) != LDAP_SUCCESS)
{
ereport(LOG,
(errmsg("could not set LDAP protocol version: %s",
ldap_err2string(r)),
errdetail_for_ldap(*ldap)));
ldap_unbind(*ldap);
return STATUS_ERROR;
}
if (port->hba->ldaptls)
{
#ifndef WIN32
if ((r = ldap_start_tls_s(*ldap, NULL, NULL)) != LDAP_SUCCESS)
#else
if ((r = ldap_start_tls_s(*ldap, NULL, NULL, NULL, NULL)) != LDAP_SUCCESS)
#endif
{
ereport(LOG,
(errmsg("could not start LDAP TLS session: %s",
ldap_err2string(r)),
errdetail_for_ldap(*ldap)));
ldap_unbind(*ldap);
return STATUS_ERROR;
}
}
return STATUS_OK;
}
/* Placeholders recognized by FormatSearchFilter. For now just one. */
#define LPH_USERNAME "$username"
#define LPH_USERNAME_LEN (sizeof(LPH_USERNAME) - 1)
/* Not all LDAP implementations define this. */
#ifndef LDAP_NO_ATTRS
#define LDAP_NO_ATTRS "1.1"
#endif
/* Not all LDAP implementations define this. */
#ifndef LDAPS_PORT
#define LDAPS_PORT 636
#endif
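/*
 * Identity function used as the default ldap_password_hook.  A loadable
 * module can install a different hook to transform the ldapbindpasswd
 * (for example, to decode an obfuscated value) before it is used for the
 * initial LDAP bind in CheckLDAPAuth().
 */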
static char *
dummy_ldap_password_mutator(char *input)
{
return input;
}
/*
* Return a newly allocated C string copied from "pattern" with all
* occurrences of the placeholder "$username" replaced with "user_name".
*/
static char *
FormatSearchFilter(const char *pattern, const char *user_name)
{
StringInfoData output;
initStringInfo(&output);
while (*pattern != '\0')
{
if (strncmp(pattern, LPH_USERNAME, LPH_USERNAME_LEN) == 0)
{
appendStringInfoString(&output, user_name);
pattern += LPH_USERNAME_LEN;
}
else
appendStringInfoChar(&output, *pattern++);
}
return output.data;
}
/*
* Perform LDAP authentication
*/
static int
CheckLDAPAuth(Port *port)
{
char *passwd;
LDAP *ldap;
int r;
char *fulluser;
const char *server_name;
#ifdef HAVE_LDAP_INITIALIZE
/*
* For OpenLDAP, allow empty hostname if we have a basedn. We'll look for
* servers with DNS SRV records via OpenLDAP library facilities.
*/
if ((!port->hba->ldapserver || port->hba->ldapserver[0] == '\0') &&
(!port->hba->ldapbasedn || port->hba->ldapbasedn[0] == '\0'))
{
ereport(LOG,
(errmsg("LDAP server not specified, and no ldapbasedn")));
return STATUS_ERROR;
}
#else
if (!port->hba->ldapserver || port->hba->ldapserver[0] == '\0')
{
ereport(LOG,
(errmsg("LDAP server not specified")));
return STATUS_ERROR;
}
#endif
/*
* If we're using SRV records, we don't have a server name so we'll just
* show an empty string in error messages.
*/
server_name = port->hba->ldapserver ? port->hba->ldapserver : "";
if (port->hba->ldapport == 0)
{
if (port->hba->ldapscheme != NULL &&
strcmp(port->hba->ldapscheme, "ldaps") == 0)
port->hba->ldapport = LDAPS_PORT;
else
port->hba->ldapport = LDAP_PORT;
}
sendAuthRequest(port, AUTH_REQ_PASSWORD, NULL, 0);
passwd = recv_password_packet(port);
if (passwd == NULL)
return STATUS_EOF; /* client wouldn't send password */
if (InitializeLDAPConnection(port, &ldap) == STATUS_ERROR)
{
/* Error message already sent */
pfree(passwd);
return STATUS_ERROR;
}
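/*
 * Two modes of operation: in search+bind mode (ldapbasedn is set), first
 * search for the user's DN using a separate bind, then reconnect and bind
 * as that DN with the client-supplied password.  In simple bind mode,
 * construct the DN from ldapprefix/ldapsuffix and bind directly.
 */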
if (port->hba->ldapbasedn)
{
/*
* First perform an LDAP search to find the DN for the user we are
* trying to log in as.
*/
char *filter;
LDAPMessage *search_message;
LDAPMessage *entry;
char *attributes[] = {LDAP_NO_ATTRS, NULL};
char *dn;
char *c;
int count;
/*
* Disallow any characters that we would otherwise need to escape,
* since they aren't really reasonable in a username anyway. Allowing
* them would make it possible to inject arbitrary custom filter clauses
* into the LDAP search filter.
*/
for (c = port->user_name; *c; c++)
{
if (*c == '*' ||
*c == '(' ||
*c == ')' ||
*c == '\\' ||
*c == '/')
{
ereport(LOG,
(errmsg("invalid character in user name for LDAP authentication")));
ldap_unbind(ldap);
pfree(passwd);
return STATUS_ERROR;
}
}
/*
* Bind with a pre-defined username/password (if available) for
* searching. If none is specified, this turns into an anonymous bind.
*/
r = ldap_simple_bind_s(ldap,
port->hba->ldapbinddn ? port->hba->ldapbinddn : "",
port->hba->ldapbindpasswd ? ldap_password_hook(port->hba->ldapbindpasswd) : "");
if (r != LDAP_SUCCESS)
{
ereport(LOG,
(errmsg("could not perform initial LDAP bind for ldapbinddn \"%s\" on server \"%s\": %s",
port->hba->ldapbinddn ? port->hba->ldapbinddn : "",
server_name,
ldap_err2string(r)),
errdetail_for_ldap(ldap)));
ldap_unbind(ldap);
pfree(passwd);
return STATUS_ERROR;
}
/* Build a custom filter or a single attribute filter? */
if (port->hba->ldapsearchfilter)
filter = FormatSearchFilter(port->hba->ldapsearchfilter, port->user_name);
else if (port->hba->ldapsearchattribute)
filter = psprintf("(%s=%s)", port->hba->ldapsearchattribute, port->user_name);
else
filter = psprintf("(uid=%s)", port->user_name);
search_message = NULL;
r = ldap_search_s(ldap,
port->hba->ldapbasedn,
port->hba->ldapscope,
filter,
attributes,
0,
&search_message);
if (r != LDAP_SUCCESS)
{
ereport(LOG,
(errmsg("could not search LDAP for filter \"%s\" on server \"%s\": %s",
filter, server_name, ldap_err2string(r)),
errdetail_for_ldap(ldap)));
if (search_message != NULL)
ldap_msgfree(search_message);
ldap_unbind(ldap);
pfree(passwd);
pfree(filter);
return STATUS_ERROR;
}
count = ldap_count_entries(ldap, search_message);
if (count != 1)
{
if (count == 0)
ereport(LOG,
(errmsg("LDAP user \"%s\" does not exist", port->user_name),
errdetail("LDAP search for filter \"%s\" on server \"%s\" returned no entries.",
filter, server_name)));
else
ereport(LOG,
(errmsg("LDAP user \"%s\" is not unique", port->user_name),
errdetail_plural("LDAP search for filter \"%s\" on server \"%s\" returned %d entry.",
"LDAP search for filter \"%s\" on server \"%s\" returned %d entries.",
count,
filter, server_name, count)));
ldap_unbind(ldap);
pfree(passwd);
pfree(filter);
ldap_msgfree(search_message);
return STATUS_ERROR;
}
entry = ldap_first_entry(ldap, search_message);
dn = ldap_get_dn(ldap, entry);
if (dn == NULL)
{
int error;
(void) ldap_get_option(ldap, LDAP_OPT_ERROR_NUMBER, &error);
ereport(LOG,
(errmsg("could not get dn for the first entry matching \"%s\" on server \"%s\": %s",
filter, server_name,
ldap_err2string(error)),
errdetail_for_ldap(ldap)));
ldap_unbind(ldap);
pfree(passwd);
pfree(filter);
ldap_msgfree(search_message);
return STATUS_ERROR;
}
fulluser = pstrdup(dn);
pfree(filter);
ldap_memfree(dn);
ldap_msgfree(search_message);
/* Unbind and disconnect from the LDAP server */
r = ldap_unbind_s(ldap);
if (r != LDAP_SUCCESS)
{
ereport(LOG,
(errmsg("could not unbind after searching for user \"%s\" on server \"%s\"",
fulluser, server_name)));
pfree(passwd);
pfree(fulluser);
return STATUS_ERROR;
}
/*
* Need to re-initialize the LDAP connection, so that we can bind to
* it with a different username.
*/
if (InitializeLDAPConnection(port, &ldap) == STATUS_ERROR)
{
pfree(passwd);
pfree(fulluser);
/* Error message already sent */
return STATUS_ERROR;
}
}
else
fulluser = psprintf("%s%s%s",
port->hba->ldapprefix ? port->hba->ldapprefix : "",
port->user_name,
port->hba->ldapsuffix ? port->hba->ldapsuffix : "");
r = ldap_simple_bind_s(ldap, fulluser, passwd);
if (r != LDAP_SUCCESS)
{
ereport(LOG,
(errmsg("LDAP login failed for user \"%s\" on server \"%s\": %s",
fulluser, server_name, ldap_err2string(r)),
errdetail_for_ldap(ldap)));
ldap_unbind(ldap);
pfree(passwd);
pfree(fulluser);
return STATUS_ERROR;
}
/* Save the original bind DN as the authenticated identity. */
set_authn_id(port, fulluser);
ldap_unbind(ldap);
pfree(passwd);
pfree(fulluser);
return STATUS_OK;
}
/*
* Add a detail error message text to the current error if one can be
* constructed from the LDAP 'diagnostic message'.
*/
static int
errdetail_for_ldap(LDAP *ldap)
{
char *message;
int rc;
rc = ldap_get_option(ldap, LDAP_OPT_DIAGNOSTIC_MESSAGE, &message);
if (rc == LDAP_SUCCESS && message != NULL)
{
errdetail("LDAP diagnostics: %s", message);
ldap_memfree(message);
}
return 0;
}
#endif /* USE_LDAP */
/*----------------------------------------------------------------
* SSL client certificate authentication
*----------------------------------------------------------------
*/
#ifdef USE_SSL
static int
CheckCertAuth(Port *port)
{
int status_check_usermap = STATUS_ERROR;
char *peer_username = NULL;
Assert(port->ssl);
/* select the correct field to compare */
switch (port->hba->clientcertname)
{
case clientCertDN:
peer_username = port->peer_dn;
break;
case clientCertCN:
peer_username = port->peer_cn;
}
/* Make sure we have received a username in the certificate */
if (peer_username == NULL ||
strlen(peer_username) <= 0)
{
ereport(LOG,
(errmsg("certificate authentication failed for user \"%s\": client certificate contains no user name",
port->user_name)));
return STATUS_ERROR;
}
if (port->hba->auth_method == uaCert)
{
/*
* For cert auth, the client's Subject DN is always our authenticated
* identity, even if we're only using its CN for authorization. Set
* it now, rather than waiting for check_usermap() below, because
* authentication has already succeeded and we want the log file to
* reflect that.
*/
if (!port->peer_dn)
{
/*
* This should not happen as both peer_dn and peer_cn should be
* set in this context.
*/
ereport(LOG,
(errmsg("certificate authentication failed for user \"%s\": unable to retrieve subject DN",
port->user_name)));
return STATUS_ERROR;
}
set_authn_id(port, port->peer_dn);
}
/* Just pass the certificate cn/dn to the usermap check */
status_check_usermap = check_usermap(port->hba->usermap, port->user_name, peer_username, false);
if (status_check_usermap != STATUS_OK)
{
/*
* If clientcert=verify-full was specified and the authentication
* method is other than uaCert, log the reason for rejecting the
* authentication.
*/
if (port->hba->clientcert == clientCertFull && port->hba->auth_method != uaCert)
{
switch (port->hba->clientcertname)
{
case clientCertDN:
ereport(LOG,
(errmsg("certificate validation (clientcert=verify-full) failed for user \"%s\": DN mismatch",
port->user_name)));
break;
case clientCertCN:
ereport(LOG,
(errmsg("certificate validation (clientcert=verify-full) failed for user \"%s\": CN mismatch",
port->user_name)));
}
}
}
return status_check_usermap;
}
#endif
/*----------------------------------------------------------------
* RADIUS authentication
*----------------------------------------------------------------
*/
/*
* RADIUS authentication is described in RFC2865 (and several others).
*/
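/*
 * Summary of the code below: CheckRADIUSAuth() obtains the password from the
 * client and tries each configured server in turn via
 * PerformRadiusTransaction(), which builds an Access-Request carrying
 * Service-Type, User-Name, NAS-Identifier and an obfuscated User-Password
 * attribute, sends it over UDP, and treats an Access-Accept reply as success
 * and an Access-Reject as failure.
 */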
#define RADIUS_VECTOR_LENGTH 16
#define RADIUS_HEADER_LENGTH 20
#define RADIUS_MAX_PASSWORD_LENGTH 128
/* Maximum size of a RADIUS packet we will create or accept */
#define RADIUS_BUFFER_SIZE 1024
typedef struct
{
uint8 attribute;
uint8 length;
uint8 data[FLEXIBLE_ARRAY_MEMBER];
} radius_attribute;
typedef struct
{
uint8 code;
uint8 id;
uint16 length;
uint8 vector[RADIUS_VECTOR_LENGTH];
/* this is a bit longer than strictly necessary: */
char pad[RADIUS_BUFFER_SIZE - RADIUS_VECTOR_LENGTH];
} radius_packet;
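/*
 * Wire layout (per RFC 2865): the fixed header is code (1 octet), id (1
 * octet), length (2 octets, network byte order) and the 16-octet Request
 * Authenticator ("vector"), i.e. RADIUS_HEADER_LENGTH = 20 octets in total.
 * Attributes follow as type/length/value triplets, appended into the "pad"
 * area by radius_add_attribute().
 */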
/* RADIUS packet types */
#define RADIUS_ACCESS_REQUEST 1
#define RADIUS_ACCESS_ACCEPT 2
#define RADIUS_ACCESS_REJECT 3
/* RADIUS attributes */
#define RADIUS_USER_NAME 1
#define RADIUS_PASSWORD 2
#define RADIUS_SERVICE_TYPE 6
#define RADIUS_NAS_IDENTIFIER 32
/* RADIUS service types */
#define RADIUS_AUTHENTICATE_ONLY 8
/* Seconds to wait - XXX: should be in a config variable! */
#define RADIUS_TIMEOUT 3
static void
radius_add_attribute(radius_packet *packet, uint8 type, const unsigned char *data, int len)
{
radius_attribute *attr;
if (packet->length + len > RADIUS_BUFFER_SIZE)
{
/*
* With remotely realistic data, this can never happen. But catch it
* just to make sure we don't overrun a buffer. We'll just skip adding
* the broken attribute, which will in the end cause authentication to
* fail.
*/
elog(WARNING,
"adding attribute code %d with length %d to radius packet would create oversize packet, ignoring",
type, len);
return;
}
attr = (radius_attribute *) ((unsigned char *) packet + packet->length);
attr->attribute = type;
attr->length = len + 2; /* total size includes type and length */
memcpy(attr->data, data, len);
packet->length += attr->length;
}
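/*
 * Illustrative example only (the user name "alice" is hypothetical): a call
 * such as
 *
 *   radius_add_attribute(packet, RADIUS_USER_NAME,
 *                        (const unsigned char *) "alice", strlen("alice"));
 *
 * appends a 7-octet attribute (1 octet type, 1 octet length, 5 octets of
 * data) and advances packet->length by the same amount, as the real calls in
 * PerformRadiusTransaction() below do.
 */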
static int
CheckRADIUSAuth(Port *port)
{
char *passwd;
ListCell *server,
*secrets,
*radiusports,
*identifiers;
/* Make sure struct alignment is correct */
Assert(offsetof(radius_packet, vector) == 4);
/* Verify parameters */
if (port->hba->radiusservers == NIL)
{
ereport(LOG,
(errmsg("RADIUS server not specified")));
return STATUS_ERROR;
}
if (port->hba->radiussecrets == NIL)
{
ereport(LOG,
(errmsg("RADIUS secret not specified")));
return STATUS_ERROR;
}
/* Send regular password request to client, and get the response */
sendAuthRequest(port, AUTH_REQ_PASSWORD, NULL, 0);
passwd = recv_password_packet(port);
if (passwd == NULL)
return STATUS_EOF; /* client wouldn't send password */
if (strlen(passwd) > RADIUS_MAX_PASSWORD_LENGTH)
{
ereport(LOG,
(errmsg("RADIUS authentication does not support passwords longer than %d characters", RADIUS_MAX_PASSWORD_LENGTH)));
pfree(passwd);
return STATUS_ERROR;
}
/*
* Loop over and try each server in order.
*/
secrets = list_head(port->hba->radiussecrets);
radiusports = list_head(port->hba->radiusports);
identifiers = list_head(port->hba->radiusidentifiers);
foreach(server, port->hba->radiusservers)
{
int ret = PerformRadiusTransaction(lfirst(server),
lfirst(secrets),
radiusports ? lfirst(radiusports) : NULL,
identifiers ? lfirst(identifiers) : NULL,
port->user_name,
passwd);
/*------
* STATUS_OK = Login OK
* STATUS_ERROR = Login not OK, but try next server
* STATUS_EOF = Login not OK, and don't try next server
*------
*/
if (ret == STATUS_OK)
{
set_authn_id(port, port->user_name);
pfree(passwd);
return STATUS_OK;
}
else if (ret == STATUS_EOF)
{
pfree(passwd);
return STATUS_ERROR;
}
/*
* secret, port and identifiers either have length 0 (use default),
* length 1 (use the same everywhere) or the same length as servers.
* So if the length is >1, we advance one step. In other cases, we
* don't and will then reuse the correct value.
*/
if (list_length(port->hba->radiussecrets) > 1)
secrets = lnext(port->hba->radiussecrets, secrets);
if (list_length(port->hba->radiusports) > 1)
radiusports = lnext(port->hba->radiusports, radiusports);
if (list_length(port->hba->radiusidentifiers) > 1)
identifiers = lnext(port->hba->radiusidentifiers, identifiers);
}
/* No servers left to try, so give up */
pfree(passwd);
return STATUS_ERROR;
}
static int
PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd)
{
radius_packet radius_send_pack;
radius_packet radius_recv_pack;
radius_packet *packet = &radius_send_pack;
radius_packet *receivepacket = &radius_recv_pack;
char *radius_buffer = (char *) &radius_send_pack;
char *receive_buffer = (char *) &radius_recv_pack;
int32 service = pg_hton32(RADIUS_AUTHENTICATE_ONLY);
uint8 *cryptvector;
int encryptedpasswordlen;
uint8 encryptedpassword[RADIUS_MAX_PASSWORD_LENGTH];
uint8 *md5trailer;
int packetlength;
pgsocket sock;
struct sockaddr_in6 localaddr;
struct sockaddr_in6 remoteaddr;
struct addrinfo hint;
struct addrinfo *serveraddrs;
int port;
socklen_t addrsize;
fd_set fdset;
struct timeval endtime;
int i,
j,
r;
/* Assign default values */
if (portstr == NULL)
portstr = "1812";
if (identifier == NULL)
identifier = "postgresql";
MemSet(&hint, 0, sizeof(hint));
hint.ai_socktype = SOCK_DGRAM;
hint.ai_family = AF_UNSPEC;
port = atoi(portstr);
r = pg_getaddrinfo_all(server, portstr, &hint, &serveraddrs);
if (r || !serveraddrs)
{
ereport(LOG,
(errmsg("could not translate RADIUS server name \"%s\" to address: %s",
server, gai_strerror(r))));
if (serveraddrs)
pg_freeaddrinfo_all(hint.ai_family, serveraddrs);
return STATUS_ERROR;
}
/* XXX: add support for multiple returned addresses? */
/* Construct RADIUS packet */
packet->code = RADIUS_ACCESS_REQUEST;
packet->length = RADIUS_HEADER_LENGTH;
if (!pg_strong_random(packet->vector, RADIUS_VECTOR_LENGTH))
{
ereport(LOG,
(errmsg("could not generate random encryption vector")));
pg_freeaddrinfo_all(hint.ai_family, serveraddrs);
return STATUS_ERROR;
}
packet->id = packet->vector[0];
radius_add_attribute(packet, RADIUS_SERVICE_TYPE, (const unsigned char *) &service, sizeof(service));
radius_add_attribute(packet, RADIUS_USER_NAME, (const unsigned char *) user_name, strlen(user_name));
radius_add_attribute(packet, RADIUS_NAS_IDENTIFIER, (const unsigned char *) identifier, strlen(identifier));
/*
* RADIUS password attributes are calculated as: e[0] = p[0] XOR
* MD5(secret + Request Authenticator) for the first group of 16 octets,
* and then: e[i] = p[i] XOR MD5(secret + e[i-1]) for the following ones
* (if necessary)
*/
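/*
 * Worked example (illustrative): a 20-character password rounds up to
 * encryptedpasswordlen = 32 octets, i.e. two 16-octet blocks.  The first
 * block is XORed with MD5(secret || Request Authenticator), the second with
 * MD5(secret || first encrypted block), which is exactly what the loop below
 * computes.
 */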
encryptedpasswordlen = ((strlen(passwd) + RADIUS_VECTOR_LENGTH - 1) / RADIUS_VECTOR_LENGTH) * RADIUS_VECTOR_LENGTH;
cryptvector = palloc(strlen(secret) + RADIUS_VECTOR_LENGTH);
memcpy(cryptvector, secret, strlen(secret));
/* for the first iteration, we use the Request Authenticator vector */
md5trailer = packet->vector;
for (i = 0; i < encryptedpasswordlen; i += RADIUS_VECTOR_LENGTH)
{
const char *errstr = NULL;
memcpy(cryptvector + strlen(secret), md5trailer, RADIUS_VECTOR_LENGTH);
/*
* .. and for subsequent iterations the result of the previous XOR
* (calculated below)
*/
md5trailer = encryptedpassword + i;
if (!pg_md5_binary(cryptvector, strlen(secret) + RADIUS_VECTOR_LENGTH,
encryptedpassword + i, &errstr))
{
ereport(LOG,
(errmsg("could not perform MD5 encryption of password: %s",
errstr)));
pfree(cryptvector);
pg_freeaddrinfo_all(hint.ai_family, serveraddrs);
return STATUS_ERROR;
}
for (j = i; j < i + RADIUS_VECTOR_LENGTH; j++)
{
if (j < strlen(passwd))
encryptedpassword[j] = passwd[j] ^ encryptedpassword[j];
else
encryptedpassword[j] = '\0' ^ encryptedpassword[j];
}
}
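/*
 * encryptedpassword[] now holds the obfuscated User-Password attribute:
 * each 16-byte chunk of the NUL-padded password has been XORed with an
 * MD5 digest of the shared secret concatenated with the previous
 * obfuscated chunk, following the RFC 2865 password-hiding scheme
 * (which seeds the first chunk with the Request Authenticator).
 */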
pfree(cryptvector);
radius_add_attribute(packet, RADIUS_PASSWORD, encryptedpassword, encryptedpasswordlen);
/* Length needs to be in network order on the wire */
packetlength = packet->length;
packet->length = pg_hton16(packet->length);
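/*
 * packetlength keeps the host-byte-order length for the sendto() call
 * below, while packet->length itself is converted to network byte order
 * for transmission.
 */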
sock = socket(serveraddrs[0].ai_family, SOCK_DGRAM, 0);
if (sock == PGINVALID_SOCKET)
{
ereport(LOG,
(errmsg("could not create RADIUS socket: %m")));
pg_freeaddrinfo_all(hint.ai_family, serveraddrs);
return STATUS_ERROR;
}
memset(&localaddr, 0, sizeof(localaddr));
localaddr.sin6_family = serveraddrs[0].ai_family;
localaddr.sin6_addr = in6addr_any;
if (localaddr.sin6_family == AF_INET6)
addrsize = sizeof(struct sockaddr_in6);
else
addrsize = sizeof(struct sockaddr_in);
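/*
 * Bind to the wildcard address in the same address family as the chosen
 * RADIUS server. The port was left as zero by the memset above, so the
 * kernel assigns an ephemeral local port on which the reply is received.
 */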
if (bind(sock, (struct sockaddr *) &localaddr, addrsize))
{
ereport(LOG,
(errmsg("could not bind local RADIUS socket: %m")));
closesocket(sock);
pg_freeaddrinfo_all(hint.ai_family, serveraddrs);
return STATUS_ERROR;
}
if (sendto(sock, radius_buffer, packetlength, 0,
serveraddrs[0].ai_addr, serveraddrs[0].ai_addrlen) < 0)
{
ereport(LOG,
(errmsg("could not send RADIUS packet: %m")));
closesocket(sock);
pg_freeaddrinfo_all(hint.ai_family, serveraddrs);
return STATUS_ERROR;
}
/* Don't need the server address anymore */
pg_freeaddrinfo_all(hint.ai_family, serveraddrs);
/*
 * Figure out at what time we should time out. We can't just use a single
 * call to select() with a timeout, since somebody could be sending invalid
 * packets to our port, causing us to retry in a loop and never time out.
 *
 * XXX: Using WaitLatchOrSocket() and doing a CHECK_FOR_INTERRUPTS() if
 * the latch was set would improve the responsiveness to
 * timeouts/cancellations.
 */
gettimeofday(&endtime, NULL);
endtime.tv_sec += RADIUS_TIMEOUT;
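/*
 * endtime is the absolute deadline for the whole exchange; each pass of
 * the loop below recomputes the remaining time in microseconds and hands
 * it to select(), so bogus packets cannot keep pushing the timeout back.
 */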
while (true)
{
struct timeval timeout;
struct timeval now;
int64 timeoutval;
const char *errstr = NULL;
gettimeofday(&now, NULL);
timeoutval = (endtime.tv_sec * 1000000 + endtime.tv_usec) - (now.tv_sec * 1000000 + now.tv_usec);
if (timeoutval <= 0)
{
ereport(LOG,
(errmsg("timeout waiting for RADIUS response from %s",
server)));
closesocket(sock);
return STATUS_ERROR;
}
timeout.tv_sec = timeoutval / 1000000;
timeout.tv_usec = timeoutval % 1000000;
FD_ZERO(&fdset);
FD_SET(sock, &fdset);
r = select(sock + 1, &fdset, NULL, NULL, &timeout);
if (r < 0)
{
if (errno == EINTR)
continue;
/* Anything else is an actual error */
ereport(LOG,
(errmsg("could not check status on RADIUS socket: %m")));
closesocket(sock);
return STATUS_ERROR;
}
if (r == 0)
{
ereport(LOG,
(errmsg("timeout waiting for RADIUS response from %s",
server)));
closesocket(sock);
return STATUS_ERROR;
}
/*
 * Attempt to read the response packet, and verify the contents.
 *
 * Any packet that's not actually a RADIUS packet, or otherwise does
 * not validate as an explicit reject, is just ignored and we retry
 * for another packet (until we reach the timeout). This is to prevent
 * a denial of service against the login attempt by someone flooding
 * the server with invalid packets on the port where we expect the
 * RADIUS response.
 */
addrsize = sizeof(remoteaddr);
packetlength = recvfrom(sock, receive_buffer, RADIUS_BUFFER_SIZE, 0,
(struct sockaddr *) &remoteaddr, &addrsize);
if (packetlength < 0)
{
ereport(LOG,
(errmsg("could not read RADIUS response: %m")));
closesocket(sock);
return STATUS_ERROR;
}
if (remoteaddr.sin6_port != pg_hton16(port))
{
ereport(LOG,
(errmsg("RADIUS response from %s was sent from incorrect port: %d",
server, pg_ntoh16(remoteaddr.sin6_port))));
continue;
}
if (packetlength < RADIUS_HEADER_LENGTH)
{
ereport(LOG,
(errmsg("RADIUS response from %s too short: %d", server, packetlength)));
continue;
}
if (packetlength != pg_ntoh16(receivepacket->length))
{
ereport(LOG,
(errmsg("RADIUS response from %s has corrupt length: %d (actual length %d)",
server, pg_ntoh16(receivepacket->length), packetlength)));
continue;
}
if (packet->id != receivepacket->id)
{
ereport(LOG,
(errmsg("RADIUS response from %s is to a different request: %d (should be %d)",
server, receivepacket->id, packet->id)));
continue;
}
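/*
 * The checks above discard datagrams from the wrong source port, runt
 * packets, packets whose declared length disagrees with what was
 * received, and responses to a different request id, before the MD5
 * signature verification below.
 */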
/*
* Verify the response authenticator, which is calculated as
* MD5(Code+ID+Length+RequestAuthenticator+Attributes+Secret)
*/
cryptvector = palloc(packetlength + strlen(secret));
memcpy(cryptvector, receivepacket, 4); /* code+id+length */
memcpy(cryptvector + 4, packet->vector, RADIUS_VECTOR_LENGTH); /* request
* authenticator, from
* original packet */
if (packetlength > RADIUS_HEADER_LENGTH) /* there may be no
* attributes at all */
memcpy(cryptvector + RADIUS_HEADER_LENGTH, receive_buffer + RADIUS_HEADER_LENGTH, packetlength - RADIUS_HEADER_LENGTH);
memcpy(cryptvector + packetlength, secret, strlen(secret));
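/*
 * cryptvector now holds code+id+length from the response, the Request
 * Authenticator from the original request, any returned attributes, and
 * the shared secret: the MD5 input over which the RADIUS Response
 * Authenticator is defined.
 */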
if (!pg_md5_binary(cryptvector,
packetlength + strlen(secret),
encryptedpassword, &errstr))
{
ereport(LOG,
(errmsg("could not perform MD5 encryption of received packet: %s",
errstr)));
pfree(cryptvector);
continue;
}
pfree(cryptvector);
if (memcmp(receivepacket->vector, encryptedpassword, RADIUS_VECTOR_LENGTH) != 0)
{
ereport(LOG,
(errmsg("RADIUS response from %s has incorrect MD5 signature",
server)));
continue;
}
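/*
 * An Access-Accept ends the transaction with STATUS_OK, an explicit
 * Access-Reject with STATUS_EOF; any other code is logged and ignored,
 * and we keep waiting for a valid response until the deadline (the
 * caller presumably distinguishes these return values, e.g. to decide
 * whether to try another configured server).
 */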
if (receivepacket->code == RADIUS_ACCESS_ACCEPT)
{
closesocket(sock);
return STATUS_OK;
}
else if (receivepacket->code == RADIUS_ACCESS_REJECT)
{
closesocket(sock);
return STATUS_EOF;
}
else
{
ereport(LOG,
(errmsg("RADIUS response from %s has invalid code (%d) for user \"%s\"",
server, receivepacket->code, user_name)));
continue;
}
} /* while (true) */
}