When submitting tasks to Cloud Tasks, we will use the built-in OIDC
authentication, which runs under the default service account (not the
Cloud Scheduler service account). We want either account to work for
app-level auth.
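For illustration, a minimal sketch of attaching built-in OIDC auth to a
Cloud Tasks HTTP task via the Cloud Tasks v2 client library; the URL and
service account email are placeholders, not our actual config values:

```java
import com.google.cloud.tasks.v2.HttpMethod;
import com.google.cloud.tasks.v2.HttpRequest;
import com.google.cloud.tasks.v2.OidcToken;
import com.google.cloud.tasks.v2.Task;

class OidcTaskExample {
  static Task buildTask() {
    return Task.newBuilder()
        .setHttpRequest(
            HttpRequest.newBuilder()
                .setUrl("https://backend.example.com/_dr/task/example") // placeholder
                .setHttpMethod(HttpMethod.POST)
                .setOidcToken(
                    // Tokens are minted under this service account; the app
                    // should accept either this or the Cloud Scheduler
                    // account for app-level auth.
                    OidcToken.newBuilder()
                        .setServiceAccountEmail(
                            "my-project@appspot.gserviceaccount.com")) // placeholder
                .build())
        .build();
  }
}
```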
This does nothing for now, but in the future this will allow us to refer
to the RegistryConfig and/or Service objects from the core project. This
will be necessary when changing CloudTasksUtils to not use the AppEngine
built-in connection (it will need to use a standard HTTP request
instead).
It seems like we're hitting App Engine capacity issues resulting in actual pages
now (for whatever reason, but likely one customer), and we obviously don't want
that.
The value of the column is set to START_OF_TIME for new entries. Every
time a row is read, the value is updated to the current time. This
prevents concurrent reads from repeatedly picking up the same entry with
the earliest request time, because each read only considers rows whose
process time is earlier than (current time - some padding time).
This basically fulfills the same function that LEASE_PADDING gives us
when using a pull queue, where a task is leased for a certain time,
during which it cannot be leased by anyone else.
See: https://cs.opensource.google/nomulus/nomulus/+/master:core/src/main/java/google/registry/dns/ReadDnsQueueAction.java;l=99?q=readdnsqueue&ss=nomulus%2Fnomulus
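For illustration, a sketch of this padded read; the entity, field names,
and padding value below are hypothetical stand-ins, not the repo's actual
schema:

```java
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;
import org.joda.time.Duration;

class PaddedReadExample {
  // Plays the same role that LEASE_PADDING did for the pull queue (value assumed).
  private static final Duration PADDING = Duration.standardMinutes(30);

  // Hypothetical entity standing in for the real one.
  @Entity
  static class DnsRefreshRequest {
    @Id Long id;
    DateTime requestTime;
    DateTime processTime;
    void setProcessTime(DateTime t) { processTime = t; }
  }

  List<DnsRefreshRequest> readBatch(EntityManager em, int batchSize) {
    DateTime now = DateTime.now(DateTimeZone.UTC);
    // Only consider rows whose process time predates (now - padding), so
    // entries just stamped by a concurrent reader are skipped.
    List<DnsRefreshRequest> batch =
        em.createQuery(
                "SELECT r FROM DnsRefreshRequest r WHERE r.processTime < :cutoff"
                    + " ORDER BY r.requestTime ASC",
                DnsRefreshRequest.class)
            .setParameter("cutoff", now.minus(PADDING))
            .setMaxResults(batchSize)
            .getResultList();
    // Stamp what we read so other readers skip these rows until the
    // padding window elapses.
    batch.forEach(r -> r.setProcessTime(now));
    return batch;
  }
}
```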
This PR adds an alternative method to upload LORDN files to the Nordn
server without using an App Engine pull queue. A new database migration
stage is added to control whether a new task is scheduled with the old or
the new method. The NordnUploadAction is configured to process both kinds
of tasks. Once all tasks scheduled with the old method are processed, we
can start using the new method exclusively.
See: go/registry-pull-queue-redesign
There is no point comparing the old CRL to the new one when the old one
is invalid. This could happen when the CA cert rotates, after which the
old CRL stops being valid because it fails signature verification against
the new cert.
This change will allow us to keep updating the CRL after a CA rotation without
having to manually delete the old CRL from the database.
See b/270983553.
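As a sketch, the validity guard can amount to attempting signature
verification against the current CA cert (class and method names here are
illustrative, not the repo's actual code):

```java
import java.security.GeneralSecurityException;
import java.security.cert.X509CRL;
import java.security.cert.X509Certificate;

class CrlGuardExample {
  static boolean isStillValid(X509CRL oldCrl, X509Certificate caCert) {
    try {
      // Fails with a signature error once the CA cert has rotated.
      oldCrl.verify(caCert.getPublicKey());
      return true;
    } catch (GeneralSecurityException e) {
      return false;
    }
  }
}
```

If the old CRL fails this check, we skip the comparison and just store the
new CRL, which is what lets updates continue after a rotation.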
ICANN doesn't like this character because it's confusable with a normal G (the
stroke tends to get lost in the visual clutter of the descender), and .com's
Extended Latin table doesn't use it either. Best to get rid of it.
For some reason `./gradlew clean build` on master is failing for me on
multiple machines due to a new org.json:json version triggering license
violations, even though the lock files have not changed.
Note that the old versions are still present because if I remove
"The JSON license", which the old versions use, the check also fails...
We might (likely will) modify some of the fiddly bits around this (maybe
the GSON serialization, where we do the actual authorization, etc.), but
this should be a decent basic shell structure for endpoints that the new
registrar console can call to retrieve JSON results.
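For a rough idea of the shape, a minimal sketch using a plain servlet and
GSON; the real shell uses the repo's action framework and auth plumbing,
so all names here are illustrative:

```java
import com.google.gson.Gson;
import java.io.IOException;
import java.util.Map;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ConsoleJsonEndpointExample extends HttpServlet {
  private static final Gson GSON = new Gson();

  @Override
  protected void doGet(HttpServletRequest req, HttpServletResponse rsp) throws IOException {
    // Placeholder for where the actual authorization check happens.
    if (req.getUserPrincipal() == null) {
      rsp.sendError(HttpServletResponse.SC_FORBIDDEN);
      return;
    }
    // Serialize the payload with GSON and return it as JSON.
    rsp.setContentType("application/json");
    rsp.getWriter().write(GSON.toJson(Map.of("status", "ok")));
  }
}
```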
This is only generated and used if "iapClientId" is set in the proxy
config. If so, we use code similar to
https://cloud.google.com/iap/docs/authentication-howto#obtaining_an_oidc_token_for_the_default_service_account
to generate an ID token that is valid for IAP. We set the token on the
Proxy-Authorization header so that we can keep using the pre-existing
access token as well -- IAP allows us to use either the Authorization
header or the Proxy-Authorization header.
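A sketch of that token-generation pattern using the google-auth-library,
following the linked doc; the actual proxy wiring may differ:

```java
import com.google.auth.oauth2.GoogleCredentials;
import com.google.auth.oauth2.IdTokenCredentials;
import com.google.auth.oauth2.IdTokenProvider;
import java.io.IOException;

class IapTokenExample {
  static String getIapIdToken(String iapClientId) throws IOException {
    GoogleCredentials credentials = GoogleCredentials.getApplicationDefault();
    // Mint an ID token whose audience is the IAP client ID.
    IdTokenCredentials idTokenCredentials =
        IdTokenCredentials.newBuilder()
            .setIdTokenProvider((IdTokenProvider) credentials)
            .setTargetAudience(iapClientId)
            .build();
    idTokenCredentials.refreshIfExpired();
    // Sent as "Proxy-Authorization: Bearer <token>", leaving the existing
    // access token in the Authorization header.
    return idTokenCredentials.getAccessToken().getTokenValue();
  }
}
```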
We were under the mistaken impression before that there was a reliable
way to, out-of-band, get a GAIA ID for a particular email address.
Unfortunately, that isn't the case (at least, not in a scalable way or
one that support agents could use). As a result, we have to allow null
GAIA IDs in the database.
When we (or the support team) create new users, we will only specify the
email address and not the GAIA ID. Then, when the user logs in for the
first time, we will have the GAIA ID from the provided ID token, and we
can populate it then.
See b/260945047.
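A sketch of the first-login backfill, with a hypothetical User type
standing in for the real entity:

```java
import com.google.api.client.googleapis.auth.oauth2.GoogleIdToken;

class GaiaIdBackfillExample {
  // Hypothetical minimal user type; the real entity allows a null GAIA ID.
  static class User {
    String gaiaId;
    String getGaiaId() { return gaiaId; }
    void setGaiaId(String id) { gaiaId = id; }
  }

  void onLogin(User user, GoogleIdToken.Payload payload) {
    if (user.getGaiaId() == null) {
      // The verified ID token carries the GAIA ID in the "sub" claim.
      user.setGaiaId(payload.getSubject());
      // ... persist the updated user here.
    }
  }
}
```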
Also refactored the corresponding tests, which should make future updates easier.
This change should be deployed at or around 2023-02-15T16:00:00Z.
* Modify DomainCreateFlow to check for an applicable defaultPromoToken
* Add handling for deleted tokens
* Change cache to allocation token cache
* Abstract away cache methods
* Use AllocationToken.getAll in create flow
* Filter out empty tokens
This is an 'easy' upgrade that requires a minor change in
common/build.gradle and the removal of an unnecessary import in buildSrc.
Gradle 7.4 and above has breaking changes that are incompatible with the latest Nebula lint plugin, so we may have to wait a while before upgrading further.
Due to the way the Beam pipeline is designed, it will expand a
recurring billing event when its event time (rather than its billing
time) is in scope for expansion. This means that the one-time will be
generated 45 days earlier. This negates the need to check whether the
expansion is finished when generating monthly invoices.
We will need to backfill the past 45 days of one-times before the new
code is deployed. As an illustration, with the old code, a cursor time
of 2023-01-17 means that all auto-renewal one-times whose billing time is
before 2023-01-17 have been created, which corresponds to an effective
cursor time of 2022-12-03 (45 days before 2023-01-17) in event time. This
cursor will need to be advanced to 2023-01-17 to ensure that there is no
gap in generated event times when switching to the new code.
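The cursor arithmetic from that example, worked in Joda-Time:

```java
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;

class CursorBackfillExample {
  public static void main(String[] args) {
    DateTime billingTimeCursor = new DateTime(2023, 1, 17, 0, 0, DateTimeZone.UTC);
    // Old code: the cursor tracked billing time; in event-time terms it is
    // effectively 45 days behind (the autorenew billing offset).
    DateTime effectiveEventTimeCursor = billingTimeCursor.minusDays(45);
    System.out.println(effectiveEventTimeCursor); // 2022-12-03T00:00:00.000Z
  }
}
```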
Currently we synthesize an instance id, which requires the use of the App
Engine Modules API. Given that we only run one version of the code at a
time, and HTTP is stateless, there is no point in tracking exactly which
GAE "instance" a metric comes from. We do lose information on which
service (default, backend, etc.) is writing the metric, but that does not
seem very important.
Using a constant fake instance ID allows us to get rid of another GAE
dependency.
The temporary queue.xml file was not deleted in the afterEach() method,
which likely caused some of the flaky tests we saw, as concurrent tests
could overwrite the file.
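A sketch of the fix, assuming the test keeps a handle on the path of the
temporary queue.xml it wrote (field and method names are illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import org.junit.jupiter.api.AfterEach;

class QueueXmlCleanupExample {
  Path queueXmlPath; // set wherever the temporary queue.xml is written

  @AfterEach
  void afterEach() throws IOException {
    // Remove the per-test file so concurrent or subsequent tests cannot
    // observe (or clobber) another test's queue.xml.
    Files.deleteIfExists(queueXmlPath);
  }
}
```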