* Fix sql script name conflict
There are two V11__ files due to a concurrent merge. Renamed one
of them to V12__.
Also removed a @NotNull annotation, which was the first in the code base.
Most of the code base uses @Nullable instead. If we do want to use
@NotNull, we may want to use the javax one instead.
* Run cross-release SQL integration tests
Run SQL integration tests across arbitrary schema and server
releases.
Refer to integration/README.md in this change for more information.
TESTED=Cloud build changes tested with cloud-build-local
Used the published jars to test sqlIntegration task locally.
* Add Cloud SQL premium list caches and compare prices with Datastore
Nothing will fail if the prices can't be loaded from Cloud SQL, or if the prices
are different. All that happens is that the error is logged. Then, once this is
running in production for a while, we'll look at the logs and see if there will
be any pricing implications from switching over to the Cloud SQL version of the
premium lists.
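A rough sketch of the dual-read comparison described above, assuming Flogger-style logging; the loader method names are placeholders, not the real API:

    // Dual read: always serve the Datastore price, but also try Cloud SQL and
    // log any mismatch. Loader names here are illustrative.
    Optional<Money> datastorePrice = loadPremiumPriceFromDatastore(label, tld);
    try {
      Optional<Money> sqlPrice = loadPremiumPriceFromCloudSql(label, tld);
      if (!datastorePrice.equals(sqlPrice)) {
        logger.atSevere().log(
            "Premium price mismatch for %s.%s: Datastore=%s, Cloud SQL=%s",
            label, tld, datastorePrice, sqlPrice);
      }
    } catch (RuntimeException e) {
      logger.atSevere().withCause(e).log("Could not load premium price from Cloud SQL");
    }
    return datastorePrice;  // Nothing changes for callers.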
* Add setMaxResults(1) per code review
* Add tests and reorder public functions
* Don't statically import caches
* Improve test pass rate
* Merge branch 'master' into dual-read-premium
* Add PremiumEntry mapping
* Allow update
* Revert column order
* Alphabetize PremiumEntry columns
* Don't bother trying to enforce order
* Private constructor
* Add schema for Cursor
* Add CursorDao and CursorDaoTest
* Fix comment on getTld
* Change tld column to scope
* Fix cursorTime to be converted to DateTime internally and other small fixes
* Add a CursorType enum and a createGlobal constructor for Cursor
* Rename flyway file
* Use cursorType from common/Cursor.java and add null checks
* Verify RegistryTool can instantiate
Add a task that instantiates all command classes in RegistryTool
with runtimeClasspath.
Also make sure that runtimeClasspath is a superset of
compileClasspath.
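A minimal sketch of what the instantiation check might look like, assuming RegistryTool exposes a command-name-to-class map (the COMMAND_MAP and Command names are assumptions):

    // Instantiate every command class with its no-arg constructor; any failure
    // (e.g. a missing runtime dependency) breaks the verification task.
    public static void main(String[] args) throws Exception {
      for (Class<? extends Command> clazz : RegistryTool.COMMAND_MAP.values()) {
        java.lang.reflect.Constructor<? extends Command> constructor =
            clazz.getDeclaredConstructor();
        constructor.setAccessible(true);
        constructor.newInstance();
      }
    }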
In some cases, we define JpaTransactionManagerRule in a TestCase
class which is extended by the concrete test class. So, we need
to check if JpaTransactionManagerRule is also defined in the super
class.
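A minimal sketch of that check, walking the class hierarchy (purely illustrative, not the exact implementation):

    // Returns true if the test class or any of its superclasses declares a
    // JpaTransactionManagerRule field.
    private static boolean hasJpaTransactionManagerRule(Class<?> testClass) {
      for (Class<?> c = testClass; c != null; c = c.getSuperclass()) {
        for (java.lang.reflect.Field field : c.getDeclaredFields()) {
          if (JpaTransactionManagerRule.class.isAssignableFrom(field.getType())) {
            return true;
          }
        }
      }
      return false;
    }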
Endpoints annotated with AUTH_INTERNAL_ONLY used to be accessible
manually with an internal RPC tool that adds App Engine specific HTTP
headers to a request to make it look like it comes from App Engine
(hence internal). This tool is used by admins to hit such endpoints
during debugging, making them effectively AUTH_INTERNAL_OR_ADMIN.
This RPC tool has never been made available outside Google, so open
source admins do not have this ability. A recent change in the RPC tool
made this hack stop working internally as well. This PR replaces all
occurrences of AUTH_INTERNAL_ONLY with AUTH_INTERNAL_OR_ADMIN and
brings the open source build into feature parity with the internal
version.
Also fixed a few issues in the router tests.
* Add a cursor for tracking monthly uploads of the transaction report to ICANN
* Add cursors to track activity, transaction, and manifest report uploads.
* Address comments
* Add @Nullable annotation to manifestCursor
* Add lock and batch load cursors.
* Add string formatting, autovalue CursorInfo object, and handling for null cursors
* Add some helper functions for loadCursors and restructure to require fewer round trips to the database
* Switch new cursors to be created with cursorTime at first of next month
* Update the Registries cache to leverage/populate the Registry cache
This is accomplished by also providing a loadAll() method on the Registry cache
that can be used to load an entire batch of Registry objects at once.
This improves efficiency, because any operation on Registries that loads all the
Registry entities (getTlds(), getTldsOfType(), and getTldEntities()) is now
plumbed through the Registry cache, loading from that cache when it is populated
and only hitting the DB when it is not. On a cache miss, the load also populates
the Registry cache, so that subsequent calls to Registry.get() will hit the cache
instead of the DB.
To give a concrete example, the following code:
for (String tld : Registries.getTlds()) {
  // ...
  doSomethingWith(Registry.get(tld));
  // ...
}
is now much more efficient, because the initial call to Registries.getTlds()
populates all the entities in the cache, and the subsequent individual calls to
Registry.get(tld) now retrieve them from the cache. Prior to this change,
Registries.getTlds() did not populate the Registry cache, and each subsequent
Registry.get(tld) had the potential to trigger an individual round-trip to the
DB, which is obviously bad for performance.
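A minimal sketch of the bulk-loading cache shape this relies on, assuming Guava's LoadingCache (imports omitted); the loadRegistry/loadRegistries helpers and the expiry are placeholders:

    LoadingCache<String, Registry> cache =
        CacheBuilder.newBuilder()
            .expireAfterWrite(10, TimeUnit.MINUTES)
            .build(
                new CacheLoader<String, Registry>() {
                  @Override
                  public Registry load(String tld) {
                    return loadRegistry(tld);  // one round trip for a single TLD
                  }

                  @Override
                  public Map<String, Registry> loadAll(Iterable<? extends String> tlds) {
                    return loadRegistries(tlds);  // one batched round trip for many TLDs
                  }
                });
    // Registries can then warm the cache in bulk via cache.getAll(tldStrings).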
* Break circular dependency between core and util
Created a new :common project and moved a minimum
number of classes to break the circular dependency
between the two projects. This gets rid of the
gradle lint dependency warnings.
Also separated api classes and testing helpers into
separate source sets in :common so that testing
classes may be restricted to test configurations.
* Allow schema-loading from arbitrary url in tests
Server/Schema compatibility tests must be able to load different versions
of the SQL schema. This change allows test runners to override the
schema location using a system property.
Note: due to dependency-locking, we cannot manipulate the dependencies
closure in the build script to load different schema jars. The jars
must not be on the classpath.
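A sketch of how a test runner might pick up the override, assuming Flyway is used to apply the schema; the property name and default location are assumptions:

    // Fall back to the schema bundled on the classpath when no override is given.
    String schemaLocation =
        System.getProperty("test.schema.location", "classpath:sql/flyway");
    Flyway flyway =
        Flyway.configure().locations(schemaLocation).dataSource(dataSource).load();
    flyway.migrate();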
* Add a new method to load all Registry entities of a given type
This is useful for things that need to know more than just the TLD strings
themselves, which is all Registries currently provides.
* Require explicit tag when starting psql docker
Defined a util class to return the docker tag of the desired PSQL version.
The class is defined in ':db' and shared by ':db' and ':core'. Used
an artifact declaration to exclude unnecessary compile dependencies.
Added a presubmit check for instantiations without explicit tag.
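Illustrative sketch of the shared helper; the class name and pinned version are placeholders:

    /** Shared by ':db' and ':core'; the single place that pins the PSQL version. */
    public final class PostgresImageName {
      private static final String VERSION = "9.6.12";  // placeholder version

      /** Returns the full docker tag of the desired PostgreSQL version. */
      public static String getTag() {
        return "postgres:" + VERSION;
      }

      private PostgresImageName() {}
    }
    // Tests then construct containers explicitly, e.g.
    //   new PostgreSQLContainer(PostgresImageName.getTag());
    // and the presubmit check flags any instantiation without an explicit tag.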
* Add by-repo-id method to RegistryLockDao
When we create new registry lock objects on user request, we will want
to make sure that there are no pending lock/unlock actions on this
domain. In order to do that, we will need to know what lock actions
there have been for this domain.
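A hedged sketch of the new lookup, assuming a JPA-backed DAO; the transaction-manager helper and entity field names are assumptions:

    // Load every RegistryLock row recorded for a domain's repo ID.
    public static ImmutableList<RegistryLock> getByRepoId(String repoId) {
      return jpaTm().transact(() ->
          ImmutableList.copyOf(
              jpaTm().getEntityManager()
                  .createQuery(
                      "SELECT rl FROM RegistryLock rl WHERE rl.repoId = :repoId",
                      RegistryLock.class)
                  .setParameter("repoId", repoId)
                  .getResultList()));
    }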
* Inline the query into a single statement
* whoops
* comments
* Refactor UI actions to be more reliable
- Created HtmlAction to handle the nitty-gritty of login and setting up
HTML pages
- Created a test to verify that all UI actions implement JSON GET, JSON
POST, or HTML classes
- Move CSS renaming into a utility class
* Move logging of request into HtmlAction
* Comment and wording in exception
* mention JsonGetAction in the comment
* JsonGetAction extends Runnable
* Run both the AppEngineRule and JpaTransactionManagerRule in tests
As previously written, the AppEngineRule wraps the infinite loop of the
server handling requests. This changes the code so that we'll have a
chain that does both the AppEngine management and the JPA transaction
manager management so we can actually use SQL in tests and in the test
server.
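A minimal sketch of the combined rule, assuming JUnit 4's RuleChain; the builder calls and ordering are illustrative:

    // A single @Rule applies both the App Engine environment and the JPA
    // transaction manager setup/teardown.
    @Rule
    public final RuleChain chain =
        RuleChain.outerRule(AppEngineRule.builder().withDatastore().build())
            .around(new JpaTransactionManagerRule());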
* remove log line
We should communicate to users why this command failed: they
are not allowed to use discounted allocation tokens on premium domains.
Currently it still fails, but we don't yet tell them why.
* Add a GET action and tests for registry lock retrieval
* Create isVerified method
* Allow lock access for admins even if they're not enabled on the registrar
* Simple CR responses
* Move locks retrieval to the GET action
* add newline at eof
* Switch to using ID
When we add an extra test entity to the current JpaTransactionManagerRule by
calling jpaRule.withEntityClass(TestEntity.class) and
jpaRule.withProperty(Environment.HBM2DDL_AUTO, "update"), the rule overrides the
golden database schema with the schema derived from the current entity class
(this is expected behavior when HBM2DDL_AUTO is enabled). This behavior prevents
us from detecting breaking changes in ORM classes.
This PR fixes the issue by adding HibernateSchemaExporter to export the DDL
script for a given extra entity class, and by making JpaTransactionManagerRule
execute that DDL script to create the corresponding table for the extra entity
when initializing the database. It also simplifies adding an extra entity class
for testing: you no longer need to (and shouldn't) call
jpaRule.withProperty(Environment.HBM2DDL_AUTO, "update").
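Simplified usage after this change might look roughly like the following (the builder shape is an assumption based on the description above):

    // The rule exports and runs DDL for TestEntity itself, so no
    // HBM2DDL_AUTO override is needed (or allowed).
    @Rule
    public final JpaTransactionManagerRule jpaRule =
        new JpaTransactionManagerRule.Builder()
            .withEntityClass(TestEntity.class)
            .build();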
* Move soyutils_usegoog.js out of node_modules
Every time npmInstall runs, it removes this file from node_modules.
Move it outside the folder to prevent this from happening.
* Move karma.conf.js and soyutils_usegoog.js
* Move karma.conf.js to be under core
* Don't wrap exceptions thrown inside Cloud SQL transactions
There's no reason to wrap them in PersistenceException, and it makes dealing
with thrown exceptions harder as seen in the diffs on test classes in this
commit.
* Add a converter for CurrencyUnits stored in the database
This uses the well-known String representation for currency units. It also
provides a base class for other converters that will be persisting the
toString() representation.
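A minimal sketch of such a converter, assuming Joda-Money's CurrencyUnit and JPA's AttributeConverter; the class name is illustrative:

    import javax.persistence.AttributeConverter;
    import javax.persistence.Converter;
    import org.joda.money.CurrencyUnit;

    /** JPA converter that persists a CurrencyUnit as its well-known string code. */
    @Converter(autoApply = true)
    public class CurrencyUnitConverter implements AttributeConverter<CurrencyUnit, String> {

      @Override
      public String convertToDatabaseColumn(CurrencyUnit currency) {
        return currency == null ? null : currency.toString();  // e.g. "USD"
      }

      @Override
      public CurrencyUnit convertToEntityAttribute(String code) {
        return code == null ? null : CurrencyUnit.of(code);
      }
    }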
* Add DB and formatting changes
* Add tests, make minor fixes
* Don't retry permanent failures when uploading ICANN monthly reports
This checks for two kinds of permanent failures that we know will never
succeed, so it makes no sense to continue retrying 11 more times before
moving on to the next file to upload. These errors are:
1.
com.google.api.client.http.HttpResponseException: 403
Your IP address xx.xx.xx.xx is not allowed to connect
2.
com.google.api.client.http.HttpResponseException: 400
<?xml version="1.0" encoding="UTF-8" standalone="yes"?><response xmlns="urn:ietf:params:xml:ns:iirdea-1.0"><result code="2002"><msg>A report for that month already exists, the cut-off date already passed.</msg><description>Date: 2019-09</description></result></response>
In order to implement this new functionality, this commit also adds a new way to
call Retriable that allows specifying the isRetryable Predicate (which is quite
useful).
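A hedged sketch of the predicate-aware retry call described above; the exact retrier method signature is an assumption:

    // Retry only when the failure is not one of the known-permanent HTTP errors
    // (403 "IP address not allowed", 400 "report already exists").
    retrier.callWithRetry(
        () -> {
          uploadReport(reportFilename);
          return null;
        },
        e -> !(e instanceof HttpResponseException)
            || isRetryableHttpError((HttpResponseException) e));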
* Reenable JpaTransactionManager for Alpha and Crash
* Make UploadClaimsListCommand implement CommandWithCloudSql
* Fix wrong call to get password
* Use Cloud SQL Socket library to provision TransactionManager
* Change to use dataSource configs
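For reference, a bare-bones connection through the Cloud SQL socket factory looks roughly like this, assuming the postgres-socket-factory library is on the classpath; instance, database, and credential values are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    Properties props = new Properties();
    props.setProperty("user", dbUser);
    props.setProperty("password", dbPassword);
    props.setProperty("cloudSqlInstance", "my-project:us-central1:my-instance");
    props.setProperty("socketFactory", "com.google.cloud.sql.postgres.SocketFactory");
    Connection conn = DriverManager.getConnection("jdbc:postgresql:///nomulus", props);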
* Add another test contact for Registry Lock testing
Previously, we only had two contacts -- one per registrar. This PR adds
a second, registry-lock-enabled, contact to one registrar for two
reasons:
1. For registry-lock-related testing, we'd like to be able to test both
positively and negatively, making sure that the permissions work the way
they should
2. In general, the UI tests should include the case where we have
multiple contacts in the same registrar. Previously, this was never the
case in tests.
* Merge remote-tracking branch 'origin/master' into addTestContact
* Don't destroy existing registry lock passwords in contacts
The existing code assumes that the "contacts" segment of the form
contains an exact representation of the registrar contacts. This breaks
when we have a contact with an existing registry lock password because
we don't want to keep passing around that password in plain text (we
never store it in plain text).
This PR changes the code so that instead of assuming the contact is
provided in its entirety, we load the contact from storage first
(matching by email address) if it exists. We then set the required
fields from the JSON object, and set the password optionally if it was
provided.
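Illustratively, the new flow merges by email address roughly like this (entity, builder, and helper names are assumptions, not the exact implementation):

    // Update the stored contact when one with this email exists; otherwise build
    // a new one. The registry lock password is only set when the form provides it.
    RegistrarContact.Builder builder =
        existingContacts.stream()
            .filter(c -> c.getEmailAddress().equals(contactJson.getEmailAddress()))
            .findFirst()
            .map(RegistrarContact::asBuilder)
            .orElseGet(RegistrarContact.Builder::new);
    applyFormFields(builder, contactJson);  // name, phone, types, etc.
    contactJson.getRegistryLockPassword().ifPresent(builder::setRegistryLockPassword);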
Alternatives:
- Create a separate RegistrarContactPassword object with a
RegistrarContact parent. This increases complexity significantly since
we'd be adding a parent-child relationship and adding more objects to
Datastore during the transition to SQL. It also doesn't completely solve
the problem of "When should we set the password?" because the password
field still must be part of the same form.
- Rearrange the UI so that the password is set as part of a completely
separate form with a separate submit action. This would be possible but
is sub-optimal for two reasons. First, we are trying to avoid re-engineering
the web console as much as possible, since we're likely to rebuild it from
scratch before too long anyway. Second, we want the
lock-password-setting to be part of the standard contact modification
workflow.
* Responses to CR
* Actually we need to allow "removal" of fields
* Remove optional
* one-statement building the contacts
We don't want to override toDiffableFieldMap because (per the javadoc)
that is supposed to contain sensitive information. So, we should just
remove it before sending it out.