If by "down" you mean truly down, then clearly nothing you can do will get an answer from the resource. Unless, of course, it's a resource known to have transient errors that may resolve themselves without help from you, in which case you might want to implement a reasonable number of retries. An example might be a resource that's temporarily busy handling other requests.
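A retry loop like that is usually paired with exponential backoff so you don't hammer an already-busy resource. Here's a minimal sketch in Python; the `fetch` callable and the choice of `ConnectionError` as the "transient" signal are assumptions you'd adapt to your actual resource:

```python
import random
import time

def fetch_with_retries(fetch, max_retries=3, base_delay=1.0):
    """Call fetch(), retrying transient failures with exponential backoff.

    `fetch` is a placeholder for whatever actually talks to the resource.
    We treat ConnectionError as "transient"; adjust for your situation.
    """
    for attempt in range(max_retries + 1):
        try:
            return fetch()
        except ConnectionError:
            if attempt == max_retries:
                raise  # out of retries: the resource really is down
            # Back off exponentially, with jitter so many clients
            # don't all retry at the same instant.
            delay = base_delay * (2 ** attempt)
            time.sleep(delay * (0.5 + random.random() / 2))
```

The jitter matters more than it looks: without it, every client that failed at the same moment retries at the same moment too, which can keep a recovering resource overloaded.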
Other approaches you may want to consider include failover copies of the resources (servers, etc.), and/or monitoring systems that alert you when critical resources go down.
All of this is hypothetical, of course, since we don't know your architecture.
But for that we need to use the respective tools, which is quite costly.
This should be a simple calculation:
What are the costs of losing that critical resource (temporarily)?
How much will a commercial solution cost (e.g. Oracle RAC or a failover setup)?
How much will a self-coded solution cost?
- how much effort is needed from your developer(s)?
- how expensive is it to keep the only person who knows that solution in your company (e.g. for extensions or bug fixes)?
- how much will it cost if this solution does not work when needed?
In the long run, cheap solutions tend to be quite expensive...
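The calculation itself really is simple. A back-of-the-envelope sketch, with entirely made-up numbers just to show the shape of the comparison:

```python
def expected_cost(upfront, annual_upkeep, outage_cost, outages_per_year, years):
    """Total cost of ownership over a planning horizon (illustrative only)."""
    return upfront + years * (annual_upkeep + outage_cost * outages_per_year)

# Hypothetical figures: a commercial cluster vs. a home-grown failover script.
commercial = expected_cost(upfront=100_000, annual_upkeep=20_000,
                           outage_cost=50_000, outages_per_year=0.1, years=5)
homegrown = expected_cost(upfront=30_000, annual_upkeep=40_000,  # key-person risk
                          outage_cost=50_000, outages_per_year=0.5, years=5)
```

With these (invented) inputs the "cheap" home-grown option comes out more expensive over five years, because the upkeep and the higher outage rate dominate the smaller upfront cost. Your real numbers will differ; the point is to actually plug them in.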
This isn't a code problem. It is an architecture and design problem.
You START with the assumption that any given system will be down at some time. Then you look into business rules and technical decisions on how to mitigate the impact of that.
Keep in mind of course that a system can go down in the middle of a request as well.
A solution that many businesses use is to have two duplicate servers, either both up all the time (two 'hot' servers) or one server as a standby (one 'hot' and one 'cold'). In this setup, the idea is that if the primary server cannot service a request then the secondary server can do so. There are many issues involved with this, however.
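At the client end, the hot/standby idea boils down to "try the primary, fall through to the secondary." A minimal sketch; `fetch` stands in for your real transport call (HTTP request, database query, ...), and treating `OSError` as "this server is down" is an assumption:

```python
def request_with_failover(servers, fetch, path):
    """Try each server in order; return the first successful result.

    `fetch(server, path)` is a placeholder for the actual call to one
    server. An OSError (connection refused, timeout, ...) means "this
    server is down", and we fall through to the next one on the list.
    """
    last_error = None
    for server in servers:
        try:
            return fetch(server, path)
        except OSError as err:
            last_error = err  # remember why, in case every server fails
    raise RuntimeError("all servers are down") from last_error
```

This doesn't address the hard parts mentioned above (detecting a failure mid-request, keeping the standby's data in sync, failing back to the primary), but it shows where the decision point lives.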