Puts the metric in <project>/tools/commands_called
It counts the use of the tool, with the following labels:
- environment
- tool (nomulus/gtech)
- command called (class name)
- success true/false
- from the shell true/false
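A self-contained sketch of the labeled-counter idea (illustrative only; the real
metric is registered through the project's metrics library, and the example
command class name is an assumption):

  import java.util.Map;
  import java.util.concurrent.ConcurrentHashMap;
  import java.util.concurrent.atomic.AtomicLong;

  final class CommandsCalledMetric {
    // One counter cell per (environment, tool, command, success, fromShell) tuple.
    private final Map<String, AtomicLong> counts = new ConcurrentHashMap<>();

    void increment(String environment, String tool, String command,
        boolean success, boolean fromShell) {
      String key = String.join("|", environment, tool, command,
          String.valueOf(success), String.valueOf(fromShell));
      counts.computeIfAbsent(key, k -> new AtomicLong()).incrementAndGet();
    }
  }

e.g. metric.increment("production", "nomulus", "ListTldsCommand", true, false);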
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=212879670
Defines cron job in crash, sandbox and production environments.
Job already exists in alpha.
Job is not added to qa environment.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=212878436
This is obsoleted by the upcoming Registry 3.0 migration, after which we will be
using neither the App Engine Mapreduce library nor Cloud Datastore.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=212864845
Updated Reporting (Beam pipeline), Registrar sync to sheets, and Cloud DNS.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=212811185
According to RFC 2046, the body of the multipart contains:
multipart-body := [preamble CRLF]
                  dash-boundary transport-padding CRLF
                  body-part *encapsulation
                  close-delimiter transport-padding
                  [CRLF epilogue]
The preamble and epilogue are optional and ignored. However, it's not 100%
explicit whether the CRLFs after the preamble and before the epilogue are
required. The one after the preamble is often omitted when there's no preamble,
so it's conceivable that you don't *have* to give the CRLF before the epilogue
if there's no epilogue (it's also enclosed in the [], making it part of the
"optional").
However, it seems that when the TMDB "migrated to the cloud" (as they
describe it) on Aug. 13, 2018, they started requiring that CRLF.
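In sketch form (a hypothetical helper, not the actual sending code), the fix is
just appending that CRLF:

  // Assembles a single-part multipart body per RFC 2046 (illustrative only).
  static String buildMultipartBody(String boundary, String bodyPart) {
    final String CRLF = "\r\n";
    return "--" + boundary + CRLF          // dash-boundary transport-padding CRLF
        + bodyPart                         // body-part (headers + content)
        + CRLF + "--" + boundary + "--"    // close-delimiter (its leading CRLF)
        + CRLF;                            // the CRLF TMDB now requires before the (empty) epilogue
  }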
TESTED=connected to a TMDB-whitelisted server and used curl to manually send the
message as we currently send it (without the final CRLF) with junk data, and got
the error from the bug. Then sent the exact same message with the additional
CRLF, and got a different error that directly relates to the content of the
junk data.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=212637246
Scope changes in delegated credentials require coordinated external changes,
and should therefore be kept separate from the scopes used in the application
default credential.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=212488389
Updated the registrar contact group management, which is the only
use case for this credential.
Also updated the G Suite domain-delegated admin access config in the admin
dashboard for both sandbox (used by alpha and sandbox) and prod.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=212320157
Unlike anchor tenants, these domains can be registered for any number of years,
but only during GA, as third parties cannot register domains pre-GA except
through the anchor tenant program.
Since this is new functionality, unlike creation of anchor tenants, there is no
fallback provided to send codes through the domain authcode; they must be sent
using the allocation token extension.
And note that, like with anchor tenants, providing the domain-specific
allocation token overrides any other reserved types that might apply to that
domain.
No changes are necessary to the domain application create flow because of the
above restriction to GA.
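A rough sketch of that override logic, with hypothetical types (the real flow
is considerably more involved):

  import java.util.Optional;

  record AllocationToken(String token, Optional<String> domainName) {}

  final class ReservationCheck {
    // A domain-specific allocation token overrides any other reserved types
    // that might apply to the domain; otherwise reserved names are blocked.
    static boolean createAllowed(
        String domainName, boolean isReserved, Optional<AllocationToken> token) {
      boolean tokenMatchesDomain =
          token.flatMap(AllocationToken::domainName).map(domainName::equals).orElse(false);
      return tokenMatchesDomain || !isReserved;
    }
  }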
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=212310701
Marksdb changed the testing URL to work with their
SSL certificate.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=212277787
As part of credential consolidation, update the credential provisioning
in the Stackdriver module. This is the only module that will continue using
a JSON-based credential.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=211878151
The vast majority of the time this is the registrar client ID you want, so
there's no reason to require specifying it explicitly each time. These are
read-only commands anyway, so the potential negative effects are minimal.
See the lock/unlock_domain commands for existing occurrences of this
behavior.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=211857712
There's no real standard for commented lines in a CSV, but this seems to be the
most widely supported option, so we may as well use it.
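For illustration, assuming the marker is a leading '#' (the most common
convention; the message doesn't name it), skipping comment lines on read looks
like:

  import java.io.IOException;
  import java.nio.file.Files;
  import java.nio.file.Path;
  import java.util.List;
  import java.util.stream.Collectors;

  final class CsvComments {
    // Returns the CSV's data lines, skipping blank lines and '#' comments.
    static List<String> readDataLines(Path csv) throws IOException {
      return Files.readAllLines(csv).stream()
          .filter(line -> !line.isBlank() && !line.startsWith("#"))
          .collect(Collectors.toList());
    }
  }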
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=211847395
As the first step in credential consolidation, we replace
injection of the application default credential for KMS and
Drive.
Tests:
- For Drive, tested with exportDomainLists and exportReservedTerms.
- For KMS, used CLI commands (get_keyring_secret and update_kms_keyring) to change and
restore the secret for one key.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=211819859
ServerSideCommand now just aggregates CommandWithConnection and
CommandWithRemoteApi, so it's arguably clearer for commands to just implement
both of these.
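In sketch form (stub interfaces standing in for the real ones; the example
command is hypothetical):

  interface CommandWithConnection { /* makes HTTP calls to the backend */ }
  interface CommandWithRemoteApi { /* needs the App Engine remote API */ }

  // Before: a pass-through aggregate interface.
  interface ServerSideCommand extends CommandWithConnection, CommandWithRemoteApi {}

  // After: each command declares exactly what it needs.
  class ExampleCommand implements CommandWithConnection, CommandWithRemoteApi {}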
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=211670031
This change required several things:
- Separating out the interfaces that merely do HTTP calls to the backend from those
that require the remote API (only load the remote API for the latter). Only the
tools service provides the remote API endpoint.
- Removing the XSRF token as an authentication mechanism (with OAuth, we no longer
need it, and trying to provide it requires initialization of the datastore
code, which requires the remote API).
I can't think of a compelling unit test for this beyond what already exists.
Tested: verified that:
- nomulus tool commands (e.g. "list_tlds") work against the tools service as they
currently do.
- The "curl" command hits endpoints on "tools" by default.
- We can use --server to specify endpoints on the default service.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=211510454
This adds the terminal step of the Spec11 pipeline: processing the output of
the Beam pipeline to send an e-mail to each registrar informing them of
identified 'bad URLs.'
This also factors out methods common between invoicing (which uses similar Beam
pipeline tools) and Spec11 into the common superpackage's ReportingModule and
ReportingUtils classes.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=210932496
This was only supposed to stay commented out until load-testing was complete.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=210087917
Do not allow the user to create TLDs on sandbox that aren't of the form
"*.test". If real TLDs are created, they will block users from registering
names under those TLDs for the nameserver set that we're using for sandbox.
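A sketch of the guard (names are illustrative, not the command's actual code):

  static void verifySandboxTldName(String environment, String tld) {
    if ("sandbox".equals(environment) && !tld.endsWith(".test")) {
      throw new IllegalArgumentException(
          "Only TLDs of the form \"*.test\" may be created on sandbox: " + tld);
    }
  }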
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=209983482
This should not cause any waste as the pods are only scaled up when necessary.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=209881536
We never used it and don't have any plans to use it going forward. All
conceivable parts of its functionality that we might use going forward have
already been subsumed into allocation tokens, which are a simpler,
standards-compliant way of handling the same use case.
Also gets rid of the hideous ANCHOR_ prefix on anchor tenant EPP authcodes
that was only ever necessary because of overloading the authcode for
anchor tenant creation. Going forward it'll be based on allocation tokens,
so there's no risk of conflicts.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=209418194
This changes the BigQuery input to the fields we ultimately want (fqdn,
registrarName, registrarEmailAddress) and the output to a structured POJO
holding the results from the API. This POJO is then converted to its final text output, i.e.:
Map from registrar e-mail to list of threat-detected subdomains:
{"registrarEmail": "c@fake.com", "threats": [{"url": "a.com", "threatType": "MALWARE"}]}
{"registrarEmail": "d@fake.com", "threats": [{"url": "x.com", "threatType": "MALWARE"}, {"url": "y.com", "threatType": "MALWARE"}]}
This gives us all the data we want in a structured JSON format, to be acted upon downstream by the to-be-constructed PublishSpec11ReportAction. Ideally, we would send an e-mail directly from the Beam pipeline, but this is only possible through third-party providers (as opposed to App Engine itself).
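A sketch of the structured POJO and its JSON rendering (hypothetical names; the
pipeline's actual classes differ):

  import java.util.List;
  import java.util.stream.Collectors;

  record ThreatMatch(String url, String threatType) {
    String toJson() {
      return String.format("{\"url\": \"%s\", \"threatType\": \"%s\"}", url, threatType);
    }
  }

  record RegistrarThreats(String registrarEmail, List<ThreatMatch> threats) {
    String toJson() {
      return String.format("{\"registrarEmail\": \"%s\", \"threats\": [%s]}",
          registrarEmail,
          threats.stream().map(ThreatMatch::toJson).collect(Collectors.joining(", ")));
    }
  }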
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=209416880
For domains (and soon for hosts as well), we output data about the owning registrar. These subrecords wind up being really big if we include all data, because they also list all the registrar contacts. To avoid bloating the RDAP responses, change to output domain response registrar information in summary format, meaning we skip the registrar contacts and events. The requester can still get this information by using the link provided to request the registrar directly.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=209189993
This makes it easy to debug issues when registrars cannot finish the SSL
handshake. There are no privacy concerns, because we keep a record of the
registrars' IP addresses in our whitelist anyway.
The remote address attribute is set by the ProxyProtocolHandler, which runs before anything else is done. The GCLP adds the protocol header at the beginning of a stream, so we know that by the time the handshake is finished (successful or not), this key must be set.
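A sketch of how such a channel attribute works in Netty (simplified; the real
ProxyProtocolHandler parses the full proxy protocol header, and the attribute
key name here is an assumption):

  import io.netty.channel.ChannelHandlerContext;
  import io.netty.channel.ChannelInboundHandlerAdapter;
  import io.netty.util.AttributeKey;

  final class ProxyProtocolSketch extends ChannelInboundHandlerAdapter {
    // Key under which the client's real IP (from the proxy protocol header) is stored.
    static final AttributeKey<String> REMOTE_ADDRESS_KEY =
        AttributeKey.valueOf("REMOTE_ADDRESS");

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
      // The real handler parses the PROXY header out of msg; placeholder here.
      String clientIp = "192.0.2.1"; // hypothetical parsed source address
      ctx.channel().attr(REMOTE_ADDRESS_KEY).set(clientIp);
      super.channelRead(ctx, msg); // pass the message along the pipeline
    }
  }

Handlers later in the pipeline (e.g. when logging a failed handshake) can then
read ctx.channel().attr(REMOTE_ADDRESS_KEY).get().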
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=209169683
Note that this gets rid of anchor tenant codes in reserved lists (yay!), which
are no longer valid. They have to come from allocation tokens now.
This removes support for LRP from domain application create flow (that's fine,
we never used it and I'm going to delete all of LRP later). It also uses
allocation tokens from EPP authcodes as a fallback, for now, but that will be
removed later once we switch fully to the allocation token mechanism.
This doesn't yet allow registration of RESERVED_FOR_SPECIFIC_USE domains using
the allocation token extension; that will come in the next CL. Ditto for
showing these reserved domains as available on domain checks when the allocation
token is specified.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=209019617
The RelayHandler is installed at the end of a channel pipeline (both frontend and backend). If it does not log the exception, it will be regarded as an unhandled exception, which shows up in the logs but does not identify the corresponding channel.
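In sketch form (simplified; the real handler also does the relaying):

  import io.netty.channel.ChannelHandlerContext;
  import io.netty.channel.ChannelInboundHandlerAdapter;
  import java.util.logging.Level;
  import java.util.logging.Logger;

  class RelayHandlerSketch extends ChannelInboundHandlerAdapter {
    private static final Logger logger =
        Logger.getLogger(RelayHandlerSketch.class.getName());

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
      // Log with the channel attached, instead of letting Netty report an
      // "unhandled exception" with no channel context.
      logger.log(Level.WARNING, "Exception on channel: " + ctx.channel(), cause);
      ctx.close();
    }
  }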
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=208984756
Only connections that have a backend are of interest to us. Move the logging
statement accordingly.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=208898433
The access token renewal on GCE does not work the way we expected. The metadata server always returns the same token as long as it is valid for 1699 to 3599 more seconds, and rolls over to the next token on its own schedule; calling refresh on the GoogleCredential has no effect. We were caching the token for 30 min (1800 seconds), so in the rare case where we "refreshed" the token while its remaining validity was between 1699 and 1800 seconds, we would cache the token for longer than its validity. [] shortened the caching period to 10 min and added logging, which proved this to be working. We no longer need the log now that the root cause has been identified. Also changed the cache period to 15 min (900 seconds), which should still be safe.
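The 15-minute cache can be expressed with a memoizing supplier, e.g. using
Guava (a sketch; the actual wiring differs):

  import com.google.common.base.Supplier;
  import com.google.common.base.Suppliers;
  import java.util.concurrent.TimeUnit;

  final class AccessTokenCache {
    private final Supplier<String> cachedToken;

    AccessTokenCache(Supplier<String> fetchToken) {
      // 15 min (900 s): the metadata server returns tokens with at least
      // 1699 s of remaining validity, so the cache can never outlive a token.
      this.cachedToken = Suppliers.memoizeWithExpiration(fetchToken, 15, TimeUnit.MINUTES);
    }

    String get() {
      return cachedToken.get();
    }
  }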
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=208888170
We confirmed that the retry is working. Instead of logging the messages
themselves, we only need to log the message hash to ensure that the same message
is retried.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=208883712
Removing this stanza from the config will cause sandbox to write to production
Cloud DNS, which is what we want.
Likewise, exclude sandbox in addition to production in the create_cdns_tld
command from the environments that point to staging.
Cloud DNS has 3 environments that we would consider using:
- staging, which is reset every week, so we can't use it for sandbox
- testing, which is not accessible from external App Engine
- production
Because of the difficulties with the first two, we've decided to use production.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=208834786
There's a very rare error where our access token is denied by GAE, which happens for a couple of seconds a day (if it happens at all). There doesn't seem to be anything wrong on our side; it could just be that the OAuth server is flaky. But to be safe, the refresh period is shortened. Also added logging to confirm what is refreshed. Note that the logging is at FINE level, which only actually writes to the logs in non-production environments.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=208823699
The objects stored in the relay buffer may leak memory when they are no longer used. Always remember to release their reference count in all cases.
Also save the relay channel and its name in BackendMetricsHandler when the handler is registered. This is because when retrying a relay, the write is sent as soon as the channel is connected, before channelActive is called.
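The release discipline in sketch form (ReferenceCountUtil.release is a safe
no-op for messages that aren't reference-counted):

  import io.netty.util.ReferenceCountUtil;
  import java.util.ArrayDeque;
  import java.util.Queue;

  final class RelayBufferSketch {
    private final Queue<Object> buffer = new ArrayDeque<>();

    void enqueue(Object msg) {
      buffer.add(msg);
    }

    // When the buffered messages are no longer needed (e.g. the relay is
    // abandoned), release each one so its underlying ByteBuf is freed.
    void discardAll() {
      Object msg;
      while ((msg = buffer.poll()) != null) {
        ReferenceCountUtil.release(msg);
      }
    }
  }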
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=208757730
It turns out that in the edge case where a write occurs at the same moment that the
relay connection is terminated, the current retry mechanism is not sufficient,
because it stores reference-counted objects whose internal buffers are already
freed.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=208738065
[1] Web whois should redirect to www.registry.google. whois.registry.google also points to the proxy IP, so redirecting to whois.registry.google would just loop. Also allow HEAD in web whois requests, in case that is used in monitoring.
[2] Separately, there's a bug introduced in [] where exception handling of inbound messages was moved to HttpsRelayServiceHandler. However, the quota handlers are installed behind the HttpsRelayServiceHandler in the channel pipeline, so exceptions thrown in the quota handlers never get processed. This results in a hung connection when the quota is exceeded.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=208651011
Tweaked a few logging levels so as not to spam the error-level logs. Also make it easier to debug issues in case a relay retry fails.
[1] Put non-fatal exceptions that should be logged at warning level in their own explicit sets. Also always use the root cause to determine whether an exception is non-fatal, because sometimes the actual causes are wrapped inside other exceptions.
[2] Record the cause of a relay failure, and record whether a relay retry is successful. This way we can look at the logs and figure out whether a relay eventually succeeded.
[3] Add a log when the frontend connection from the client is terminated.
[4] Always close the relay channel when a relay has failed, which, depending on whether the channel is frontend or backend, will reconnect and trigger a retry.
[5] Lastly, changed failure tests to use assertThrows instead of fail.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=208649916
We stopped updating the GCS bucket a while ago. The external repos should be sufficient.
Also added a comment to explain dependency shadowing by the closure rules.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=208234650
16 is consistent with how we've generated codes for anchor tenants in the past.
Also gets rid of a space in the output so that it's a fully valid CSV.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=208106631
This adds actual subdomain verification via the SafeBrowsing API to the Spec11
pipeline, as well as on-the-fly KMS decryption via the GenerateSpec11Action, so
that our API key can be stored in source code securely.
Testing the interaction is difficult due to serialization requirements, and will be significantly expanded in the next CL. For now, the test verifies basic end-to-end pipeline behavior.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=208092942
The previous CL had a bug: non-200 responses are outbound errors, and so are not caught in the exceptionCaught() method.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=208063877
This seems to fix the FOSS test timeout.
Also use the statically linked netty-tcnative library in tests to ensure that
the OpenSSL provider is always available in tests. In production, we should use
the dynamically linked version to reduce binary footprint and rely on the system
OpenSSL library.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=208057173
The "tar file encoding" saves the file + metadata (filename and modification) in a "tar" format that is required in the RDE spec, even though it only contains a single file.
This is only relevant for RyDE, and not for Ghostryde. In fact, the only reason Ghostryde exists is to not have the TAR layer.
Currently we only encrypt RyDE, so we only need the TAR encoding. We plan to add decryption ability so we can test files we sent to IronMountain if there's a problem - so we will need TAR decoding for that.
The new file - RydeTar.java - has both encoding and decoding. We keep the format used for all other Input/OutputStreams for consistency, even though in this case it could be a private part of the RyDE encoder / decoder.
This is one of a series of CLs - each merging a single "part" of the encoding.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=208056757
Masks user credentials (tags 'pw' and 'newPW') in EPP XML messages.
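One way to do such masking (a regex sketch; the actual implementation may
instead operate on the parsed XML or handle namespace-prefixed tags):

  import java.util.regex.Pattern;

  final class EppPasswordMasker {
    // Matches <pw>...</pw> and <newPW>...</newPW>, including across newlines.
    private static final Pattern PW_TAG =
        Pattern.compile("<(pw|newPW)>.*?</\\1>", Pattern.DOTALL);

    static String maskPasswords(String eppXml) {
      return PW_TAG.matcher(eppXml).replaceAll("<$1>*******</$1>");
    }
  }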
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=207953894
Previously the SSL initializer tests always used the JDK provider, which is not really testing what happens in production, where we take advantage of the OpenSSL provider. Now the tests run with all providers that are available (through JUnit parameterization). Some bugs that may cause flakiness were fixed in the process.
Change how SNI is verified in tests. It turns out that the old method (only verifying the SSL parameters in the SSL engine) does not actually ensure that the SNI address is sent to the peer, only that the SSL engine is configured to send it (this value exists even before a handshake is performed). Also, there's likely a bug in Netty's SSL engine that does not set this parameter when created with a peer host.
Lastly, the HTTP test utils are changed so that they do not use pre-defined constants for header names and values. We want the tests to confirm that these constants are what we expect them to be; using string literals also makes these tests more explicit.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=207930282
The design doc is at []
The next step will be to tie this into the domain create flow: if the domain
name is on a reserved list, allow it to be created if a token is specified that
has the given domain name on it.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=207884521