The main purpose of this PR is to help debug b/234189023, where a
registrar reported that in sandbox they observed seemingly successful EPP
responses to update commands deleting NS records, even though the records
were not actually deleted after the commands executed.
To actually load the persisted domain resource after an update, we would
need to execute another transaction immediately after the update
transaction. That can only be done outside the flow (i.e. in FlowRunner or
EppController) and would require checking the flow type before logging,
which seems unnecessarily complex.
For now we are just adding logs inside the update transaction itself
(sketched below) to validate that:
1. The NS records to delete are as expected.
2. The current NS records are as expected.
3. The new NS records to persist are as expected.
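For illustration only, the added logging could look roughly like the
following sketch (the class, method, and parameter names are placeholders,
not the actual DomainUpdateFlow code):

```java
import com.google.common.flogger.FluentLogger;
import java.util.Set;

/** Illustrative sketch only; names and structure do not match the real flow code. */
public class DomainUpdateNameserverLoggingSketch {
  private static final FluentLogger logger = FluentLogger.forEnclosingClass();

  /** Called inside the update transaction, before the updated domain is persisted. */
  static void logNameserverChanges(
      Set<String> nameserversToRemove,
      Set<String> currentNameservers,
      Set<String> newNameservers) {
    logger.atInfo().log("Nameservers to delete: %s", nameserversToRemove);
    logger.atInfo().log("Current nameservers: %s", currentNameservers);
    logger.atInfo().log("New nameservers to persist: %s", newNameservers);
  }
}
```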
The EPP success reply is the default reply when no errors are thrown in
a transaction. If we see a success reply (which means that the
transaction finished successfully) together with the expected logs from the
transaction, the only remaining explanation would be that somewhere in the
ORM layer the Java representation of the entity differs from what is
actually presented to the database. That would signal a much bigger and
more fundamental problem, which is quite unlikely given how isolated the
issue under consideration is.
In any case we would like to add the logging functionality in sandbox and ask
the registrar to report again when they see similar issues.
Also made some typo and linting fixes.
We're running into issues pulling 2.1.3 from Maven, possibly due to
vulnerabilities in dependencies, so this updates it to the most recent
version, 2.2.6.
We have backend max-instances set to 100, which apparently exceeds the default
quota for GAE. Add info to the configuration doc on updating the quota or
changing this parameter.
* Downgrade dependencies that no longer support Java 8
Downgrade two dependencies whose latest versions no longer support
Java 8.
A follow-up PR will add Java 8 compatibility to presubmit tests.
* Use Gradle dependency dynamic versioning
Use dynamic versioning for Gradle dependencies when possible.
Please refer to go/dr-dependency-upgrade for more information about the
automation plan.
This PR calls out all dependencies that must be pinned to specific
versions for various reasons. The remaining ones are converted to
open-ended version ranges ("[version_str,)").
* Check PAK on domain create
* Add unit test
* update docs
* Remove unnecessary setup
* Fix blank line
* Add check and test to all relevant flows
* Change error message
* Downgrade Caffeine to 2.9.3
Apparently Caffeine >=3.* requires Java 11, and we're still stuck on Java 8
because of App Engine Standard. Fortunately this doesn't affect the exposed
interface we're using, so we can simply go back to the newest Caffeine version
once Registry 3.0 Phase 3 (GKE migration) is completed.
* Begin migration from Guava Cache to Caffeine
Caffeine is apparently strictly superior to the older Guava Cache (and is even
recommended in place of Guava Cache in Guava Cache's own documentation).
This adds the relevant dependencies and switches over just a single call site
to use the new Caffeine cache. It also implements a new pattern, asynchronously
refreshing the cache value starting at half of our configured expiration time.
For frequently accessed entities this will allow us to NEVER block on a load,
as the value will be asynchronously refreshed in the background long before it
ever expires synchronously during a read operation.
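A minimal sketch of that pattern with Caffeine (the duration and loader here
are placeholders, not our real configuration):

```java
import com.github.benmanes.caffeine.cache.CacheLoader;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.LoadingCache;
import java.time.Duration;

public class EntityCacheSketch {
  // Placeholder value; the real expiry comes from our configuration.
  private static final Duration CACHE_EXPIRY = Duration.ofMinutes(10);

  static <K, V> LoadingCache<K, V> create(CacheLoader<K, V> loader) {
    return Caffeine.newBuilder()
        // Entries are evicted after the full configured duration...
        .expireAfterWrite(CACHE_EXPIRY)
        // ...but a background refresh kicks in at half that duration, so hot
        // entries are reloaded asynchronously and reads never block on an
        // expired value.
        .refreshAfterWrite(CACHE_EXPIRY.dividedBy(2))
        .build(loader);
  }
}
```

Note that refreshAfterWrite only triggers a reload when a stale entry is
actually read, so truly idle entries still just expire normally.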
* Build Java 8-compatible release
Use the new options.release Gradle property to make sure builds are
compatible with Java 8, which is the runtime on App Engine.
This new property replaces sourceCompatibility, targetCompatibility, and
bootclasspath (the last of which wasn't previously set, which is why we
couldn't detect Java 9 API usage when building).
* Bump flogger and beam dependency versions
Beam 2.34.0 -> 2.37.0
Flogger 0.7.3 -> 0.7.4
IntelliJ keeps getting confused about which version of Flogger we're
bringing in. Even though we had previously locked Flogger to 0.7.3, for
some reason it was still bringing in Beam's transitive dependency of
0.6.0, which was causing a bunch of class initialization errors.
Bumping Beam to 2.37.0 bumps the transitive dependency to 0.7.4, so we
can always use that.
* Add DS validation to match Cloud DNS
* Add checks to flows
* Add some flow tests
* Add tests for DomainCreateFlow
* Add tests for UpdateDomainCommand
* Fix docs test
* Small fixes
* Remove builder from tests
This version of Beam does not have an explicit dependency on log4j.
There are a couple of other things that need to change due to the
upgrade.
1) The new version pulls in a dependency that is not on Maven Central
but on packages.confluent.io, so we need to explicitly add this repo.
2) The new version has a dependency on Flogger 0.6 and above, which removed
the LoggerConfig class (see google/flogger#142).
We therefore backported the class. In the long term we should do what
was suggested in the issue and use the normal JDK Logger config
directly.
3) The initSqlPipeline dependency graph also needs to be updated.
* Add NotLoggedInException tests to flows and flow docs
This wasn't included in flows.md before because the test existed in
ResourceFlowTestCase. So even though the exception could be thrown and
even though this was tested, it wasn't picked up in the documentation
because the documentation is generated from the corresponding concrete
test class.
We want to keep the read-only-mode exception as an unchecked exception,
so we introduce a temporary check in the EppController that provides a
specific error message for this situation (rather than letting it fall
through to the generic "command failed" messaging).
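Very loosely, the temporary check behaves like the self-contained sketch
below (the exception and method names are made up for illustration; only
the EPP result codes are real):

```java
/** Illustrative only: map a read-only-mode failure to a specific EPP error message. */
public class ReadOnlyModeHandlingSketch {
  // Stand-in for the real unchecked exception thrown while the registry is read-only.
  static class ReadOnlyModeException extends RuntimeException {}

  static String runCommand(Runnable flow) {
    try {
      flow.run();
      return "1000 Command completed successfully";
    } catch (ReadOnlyModeException e) {
      // Specific messaging instead of the generic "command failed" reply.
      return "2400 Registry is currently in read-only mode; please try again later";
    } catch (RuntimeException e) {
      return "2400 Command failed";
    }
  }
}
```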
* Update terraform files and instructions
Update proxy terraform files based on current best practices and allow
exclusion of forwarding rules for HTTP endpoints. Specifically:
- Add a "public_web_whois" input to allow disabling the public HTTP
whois forwarding.
- Add "description" fields to all variables.
- Move outputs of the top-level module into "outputs.tf".
- Auto-reformat using hclfmt.
* Rename client ID to registrar ID in most places
This is a code-only change that shouldn't require any sort of data
migration. Correspondingly, there are some existing uses of clientId that are
not migrated (e.g. Datastore fields, task queue payloads, URL parameters for
actions that might be hit from task queues, etc.). And it of course doesn't
modify any fields in EPP XML. Note that the Cloud SQL schema fields are
already named using the registrar_id pattern.
This also doesn't yet touch on the -c parameters in nomulus tools; that will be
coming later (since that is an external manual touch-point, it will require a
lot more in the way of changes to various meta scripts and documentation).
* Change more client IDs
* Merge branch 'master' into clientid-to-registrarid
* Implement a util class to manage push queues using Cloud Tasks API
Push queues were part of App Engine when they debuted. As a result, the
Task Queue API was part of the App Engine SDK and can only be used in the
App Engine classic runtime. The new Cloud Tasks API can be used in any
runtime but it only supports push queues. In this PR we implement a util
class (CloudTasksUtils) like TaskQueueUtils to handle enqueuing tasks to
push queues using Cloud Tasks. One action (TldFanoutAction) was
converted to use the new API as a demo. Mass migration of other call sites of
the old API will follow in a separate PR.
TESTED=deployed to alpha and verified that tasks are correctly enqueued
and executed.
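For reference, enqueuing an App Engine task through the Cloud Tasks client
looks roughly like this (the queue name and endpoint are placeholders, and
this is not the actual CloudTasksUtils code):

```java
import com.google.cloud.tasks.v2.AppEngineHttpRequest;
import com.google.cloud.tasks.v2.CloudTasksClient;
import com.google.cloud.tasks.v2.HttpMethod;
import com.google.cloud.tasks.v2.QueueName;
import com.google.cloud.tasks.v2.Task;
import java.io.IOException;

public class CloudTasksEnqueueSketch {
  static Task enqueueFanoutTask(String project, String location) throws IOException {
    try (CloudTasksClient client = CloudTasksClient.create()) {
      Task task =
          Task.newBuilder()
              .setAppEngineHttpRequest(
                  AppEngineHttpRequest.newBuilder()
                      .setHttpMethod(HttpMethod.GET)
                      .setRelativeUri("/_dr/cron/fanout") // placeholder endpoint
                      .build())
              .build();
      // The queue must already exist; Cloud Tasks only supports push queues.
      return client.createTask(QueueName.of(project, location, "retryable-cron-tasks"), task);
    }
  }
}
```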
The API provided by the GAE SDK will not be available outside GAE
runtime. This presents a problem when we migrate off of GAE. More
pressingly, the RDE pipeline migration to Beam requires that we write to
GCS on GCE. Previously we were able to sidestep the issue by delegating
the writes to FileIO provided by Beam, which knows how to write to GCS.
However the RDE pipeline cannot use FileIO directly, as it needs to write
to multiple files in one go, so explicit use of the GCS API is needed.
An unfortunate side effect of the API migration is that the new testing
library contains a bug which makes serializing GcsUtils impossible. It
is fixed upstream but not released yet. The fix has been backported for
the time being.
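For context, a minimal sketch of writing a file to GCS with the Cloud
Storage client library, which works on GCE/Beam workers as well as on GAE
(the bucket and file names are placeholders):

```java
import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import java.nio.charset.StandardCharsets;

public class GcsWriteSketch {
  static void writeDepositFile(String bucket, String filename, String contents) {
    Storage storage = StorageOptions.getDefaultInstance().getService();
    BlobInfo blobInfo = BlobInfo.newBuilder(BlobId.of(bucket, filename)).build();
    // Unlike the GAE-only GCS SDK, this client is usable from any runtime.
    storage.create(blobInfo, contents.getBytes(StandardCharsets.UTF_8));
  }
}
```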
* Update GCL dependency to avoid security alert
This required a few changes in addition to the dependency update.
- a few transitive / required dependency updates as well
- updating soyutils_usegoog.js and adding checks.js because they're
necessary as part of the Soy compilation process
- Using a trustedResourceUri in the buildSrc Soy compilation instead of
a string
- changing the arguments to the Soy-to-Java compiler to comply with the
new version
- Moving all Soy UI files to be in the registrar directory. This was
not the case before due to previous thinking that we'd have separate
admin and registrar consoles -- this is no longer the case so it's no
longer necessary. This necessitated various refactorings and reference
changes.
- The new Soy-to-JavaScript compiler requires this move, as it removes the
"deps" param that we were previously using to say "use the general UI
utils as dependencies for the registrar-console files".
- Creating a SQL environment and loading test data in the test server
main method -- previously, the local test server did not work.
- Fix some JS code that was referencing now-deleted library functions
- Removal of the Karma tests, as the karma-closure library hasn't been
updated since 2018 and it no longer works. We never noticed any errors
from the Karma tests, we never change the JS, and we have the
Java+Selenium screenshot differ tests to test the UI anyway.
* Upgrade testcontainers to work around a race
testcontainers 1.15.? has a race condition that occasionally causes deadlocks.
This can be worked around by upgrading to 1.15.2 and setting the transport
type to http5.
See https://github.com/testcontainers/testcontainers-java/issues/3531
for more information.
There are two changes that are not lockfiles:
- dependencies.gradle
- java_common.gradle
* Fix some low-hanging code-quality fruit
These include problems such as: use of raw types, unnecessary throw clauses,
unused variables, and more.
* Add Gradle tasks to stage BEAM pipelines
Add a Gradle task to stage flex-template based pipelines for the alpha and
crash environments.
This is a follow-up to go/r3pr/1028, which is also under review.
* Update more dependencies to newer versions
* Add lockfiles and back out 2 problematic dep updates
* Fix the build (backs out more changes)
* Back out qdox 2.0 too
* Properly set up JPA in BEAM workers
Sets up a singleton JpaTransactionManager on each worker JVM for all
pipeline nodes to share.
Also added/updated relevant dependencies. The BEAM SDK version change
caused the InitSqlPipeline's graph to change.
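The per-worker singleton setup is conceptually similar to the sketch below
(the DoFn, field type, and factory method are illustrative; the real code
wires up JpaTransactionManager differently):

```java
import org.apache.beam.sdk.transforms.DoFn;

// Illustrative sketch: each worker JVM lazily initializes one shared JPA
// transaction manager, so all DoFn instances on that worker reuse the same
// connection pool.
public abstract class SqlWriterSketch<T> extends DoFn<T, Void> {
  private static volatile Object jpaTransactionManager; // stand-in for the real type

  @Setup
  public void setup() {
    if (jpaTransactionManager == null) {
      synchronized (SqlWriterSketch.class) {
        if (jpaTransactionManager == null) {
          jpaTransactionManager = createJpaTransactionManager(); // hypothetical factory
        }
      }
    }
  }

  protected abstract Object createJpaTransactionManager();
}
```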
* Upgrade error-prone to 3.3.4
This should fix the failure seen with OpenJDK 11.0.9 under 3.3.3.
Fixed new antipatterns raised by the new version:
- Replaced unnecessary lambdas with methods.
- Switched wait/sleep calls to equivalent methods using java.time types (see
the sketch below).
- Stopped assigning types that only inherit Object.toString() to string
parameters in logging statements.
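As a small illustration of the wait/sleep change, using the java.time
overload that Guava provides (the surrounding method is hypothetical):

```java
import com.google.common.util.concurrent.Uninterruptibles;
import java.time.Duration;

public class SleepMigrationSketch {
  static void waitBetweenRetries() {
    // Before: Uninterruptibles.sleepUninterruptibly(5, TimeUnit.SECONDS);
    // After: the equivalent java.time-based overload.
    Uninterruptibles.sleepUninterruptibly(Duration.ofSeconds(5));
  }
}
```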
* Minor python changes
Use dataclasses instead of attrs. The former is part of the standard lib
while the latter may need to be installed separately.
Also added python3 to the list of prerequisites.
* CertificateChecker with checks for expiration and key length
* Add validity length check
* Get rid of hard-coded constants and DSA checks
* Add files that for some reason weren't included in the last commit
* Rename violations and other fixes
* Add displayMessage to CertificateViolation enum
* Switch violations from an enum to a class
* small changes
* Get rid of ECDSA checks
* add checks for old validity length
* Change error message for validity length
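For illustration, the expiration and key-length checks described in this PR
could be sketched as follows (the thresholds and method names are
assumptions, not the actual CertificateChecker code):

```java
import java.security.cert.X509Certificate;
import java.security.interfaces.RSAPublicKey;
import java.time.Duration;
import java.time.Instant;
import java.util.Date;

// Illustrative sketch of the kinds of checks described above.
public class CertificateCheckerSketch {
  private static final int MIN_RSA_KEY_LENGTH = 2048; // assumed threshold
  private static final Duration MAX_VALIDITY = Duration.ofDays(398); // assumed threshold

  static boolean hasAcceptableValidity(X509Certificate cert, Instant now) {
    return !Date.from(now).after(cert.getNotAfter())
        && !Date.from(now).before(cert.getNotBefore())
        && Duration.between(cert.getNotBefore().toInstant(), cert.getNotAfter().toInstant())
                .compareTo(MAX_VALIDITY)
            <= 0;
  }

  static boolean hasAcceptableKeyLength(X509Certificate cert) {
    if (cert.getPublicKey() instanceof RSAPublicKey) {
      return ((RSAPublicKey) cert.getPublicKey()).getModulus().bitLength() >= MIN_RSA_KEY_LENGTH;
    }
    // The PR dropped DSA/ECDSA checks; this sketch treats other key types as violations.
    return false;
  }
}
```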
Without it we kept getting the following warning:
ERROR StatusLogger Log4j2 could not find a logging implementation. Please add log4j-core to the classpath. Using SimpleLogger to log to the console...
* Update user-facing documentation
Give our docs a complete overhaul to account for changes in the system,
notably the requirement to configure postgresql.
* Fix dangling sentence.
* Merge branch 'master' into admin-docs
For some inexplicable reason I have to move the javax.mail package one
spot up to avoid its classes being shadowed by those provided in the
appengine package...
* Update BEAM SDK to work with Java 11
Upgraded BEAM dependencies to 2.23.0.
Updated Spec11 and invoice pipelines:
- Added the required region parameter.
- Removed the workaround code for staging.
Verified that staging is successful in alpha:
./nom_build :core:registryTool --args='-e alpha --sql_access_info "gs://..." deploy_spec11_pipeline --project domain-registry-alpha'
and
./nom_build :core:registryTool --args='-e alpha --sql_access_info "gs://..." deploy_invoicing_pipeline'
* Enable Java 11 features
As of this commit Java 11 must be used to build. The generated bytecode
is still at Java 8 due to App Engine task queue limitations.
Also fixed a bug where the included google-java-format jar file was not
used, requiring the user to install it separately.
See: https://cloud.google.com/appengine/docs/standard/java/taskqueue/push
Add the class paths of the source files generated by annotation processors to
the javadoc task's class path so that it doesn't complain about missing
Dagger classes.
Also remove empty <p> tags in all generated source files, because jaxb
generates files in multiple locations.
Lastly, for unknown reasons, when the source level is set to > 8 the core
subproject throws a warning about a Gradle internal annotation processor
that only supports up to Java 8, which causes the Java compilation to fail
because we set -Werror on all Java compilation tasks. I don't think there
is a strong reason to set -Werror anyway, so this commit removes it.
* Get rid of all remaining JUnit 4 usages except in prober & proxy subprojects
Caveat: Test suites aren't yet implemented in JUnit 5, so we still use the
ones from JUnit 4 in the core subproject.
* Fix some build errors
* Migrate the documentation package to Java 11
The old Doclet API is deprecated and was removed in Java 12. This commit
changes the documentation package to use the new recommended API.
However it is not a drop-in replacement and there are non-idiomatic
usages all over the place. I think it is easier to keep the current code
logic and kind of shoehorn in the new API than to start afresh, as the
return on investment of a do-over is not great.
Also note that the docs package is disabled as of this commit because we
are still using Java 8 to compile which lacks the new API. Once we
switch our toolchains to Java 11 (but still compiling Java 8 bytecode)
we can re-enable this package.
TESTED=ran `./gradlew :docs:test` locally with the documentation package
enabled.
This makes it easier to later migrate the package to Java 11. If we move
and migrate in a single PR, then because of how much of the content is
changed, git will have trouble recognizing that some files are both
renamed *and* modified, and will treat them as distinct files, making code
review difficult.