Mirror of https://github.com/internetee/registry.git, synced 2025-07-29 14:06:21 +02:00
fix: handle HTTPClient::KeepAliveDisconnected in OrgRegistrantPhoneCheckerJob
This commit implements reliable connection error handling for the Company Register API integration. The job previously failed, without any proper recovery mechanism, when connection errors occurred.

The implementation:
- Adds a lightweight Retryable module with configurable retry logic
- Implements caching of API responses (1 day expiration)
- Handles common network errors such as KeepAliveDisconnected and timeouts
- Provides a fallback mechanism when all retry attempts fail
- Ensures test reliability by skipping the cache in the test environment

Testing:
- Added specific tests for both the recovery and fallback scenarios
- Verified cache behavior in production and test environments

Resolves connection errors observed in production logs without adding unnecessary complexity to the codebase.
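A minimal sketch of the caching behaviour the message describes, assuming a hypothetical cache key and client helper; only the 1-day expiration and the test-environment skip come from the description above, the rest is illustrative:

```ruby
# Hypothetical sketch of the response caching described in the commit message.
# The method, cache key, and company_register_client helper are assumptions;
# the 1-day expiry and the test-environment bypass are from the description.
def fetch_companies(personal_code)
  # Skip the cache entirely in tests so stubs and fixtures stay deterministic.
  return company_register_client.fetch(personal_code) if Rails.env.test?

  Rails.cache.fetch("company_register/#{personal_code}", expires_in: 1.day) do
    company_register_client.fetch(personal_code)
  end
end
```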
This commit is contained in:
parent 832ebff533
commit a11c0fca2d
4 changed files with 214 additions and 7 deletions
app/lib/retryable.rb (new file, 39 additions)
@@ -0,0 +1,39 @@
# frozen_string_literal: true

# Module for retrying operations with external APIs
module Retryable
  # Executes a code block with a specified number of retry attempts in case of specific errors
  # @param max_attempts [Integer] maximum number of attempts (defaults to 3)
  # @param retry_delay [Integer] delay between attempts in seconds (defaults to 2)
  # @param exceptions [Array<Class>] exception classes to catch (defaults to StandardError)
  # @param logger [Object] logger object (must support info, warn, error methods)
  # @param fallback [Proc] code block executed if all attempts fail
  # @return [Object] result of the block execution or fallback
  def with_retry(
    max_attempts: 3,
    retry_delay: 2,
    exceptions: [StandardError],
    logger: Rails.logger,
    fallback: -> { [] }
  )
    attempts = 0

    retry_attempt = lambda do
      attempts += 1
      yield
    rescue *exceptions => e
      logger.warn("Attempt #{attempts}/#{max_attempts} failed with error: #{e.class} - #{e.message}")

      if attempts < max_attempts
        logger.info("Retrying in #{retry_delay} seconds...")
        sleep retry_delay
        retry_attempt.call
      else
        logger.error("All attempts exhausted. Last error: #{e.class} - #{e.message}")
        fallback.call
      end
    end

    retry_attempt.call
  end
end
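A minimal usage sketch showing how the job named in the commit title could wrap its Company Register call with this module. Everything except Retryable#with_retry and the HTTPClient error classes is an assumption about the surrounding code, not part of this diff:

```ruby
# frozen_string_literal: true

# Hypothetical call site: the job class body, fetch_companies helper, and
# verify_registrant_phone step are illustrative assumptions.
class OrgRegistrantPhoneCheckerJob < ApplicationJob
  include Retryable

  def perform(personal_code)
    companies = with_retry(
      exceptions: [HTTPClient::KeepAliveDisconnected, HTTPClient::TimeoutError],
      fallback: -> { [] } # degrade to an empty result when every attempt fails
    ) do
      fetch_companies(personal_code) # hypothetical helper hitting the Company Register API
    end

    companies.each { |company| verify_registrant_phone(company) } # hypothetical processing step
  end
end
```

Because with_retry rescues only the listed exception classes, unexpected errors still surface normally, while the keep-alive disconnects and timeouts from the production logs are retried and eventually replaced by the fallback value.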