Java Web Crawler -- HttpClient
import java.io.IOException;

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class SimpleHttpClient {
    public static void main(String[] args) {
        // Create a default HttpClient instance
        CloseableHttpClient httpClient = HttpClients.createDefault();
        try {
            // Build a GET request for the target URL
            HttpGet httpGet = new HttpGet("http://www.example.com/");
            // Execute the request and obtain the response
            CloseableHttpResponse response = httpClient.execute(httpGet);
            try {
                // Print the HTTP status code
                System.out.println("Response Code : " + response.getStatusLine().getStatusCode());
                // Read the response body as a UTF-8 string
                String responseBody = EntityUtils.toString(response.getEntity(), "UTF-8");
                System.out.println(responseBody);
            } finally {
                // Release the connection held by this response
                response.close();
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                // Close the client and free its underlying resources
                httpClient.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
This code uses the Apache HttpClient library to send an HTTP GET request to the specified URL and print the response status code and body. It includes exception handling so that the response and the client are closed cleanly even when an error occurs.
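For crawling, it is usually also worth setting timeouts and a User-Agent header so requests do not hang on slow servers and are less likely to be rejected. The following is a minimal sketch under the same HttpClient 4.x dependency; the class name ConfiguredHttpClient, the timeout values, and the User-Agent string are illustrative placeholders, not values from the original post. It uses try-with-resources, which CloseableHttpClient and CloseableHttpResponse support, in place of the explicit finally blocks above.

import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class ConfiguredHttpClient {
    public static void main(String[] args) throws Exception {
        // Timeouts in milliseconds (illustrative values) so the crawler does not block indefinitely
        RequestConfig config = RequestConfig.custom()
                .setConnectTimeout(5000)            // time to establish the TCP connection
                .setSocketTimeout(5000)             // max wait between data packets
                .setConnectionRequestTimeout(3000)  // time to obtain a connection from the pool
                .build();

        // try-with-resources closes the client and the response automatically
        try (CloseableHttpClient httpClient = HttpClients.custom()
                .setDefaultRequestConfig(config)
                .setUserAgent("Mozilla/5.0 (compatible; SimpleCrawler/1.0)") // placeholder User-Agent
                .build()) {
            HttpGet httpGet = new HttpGet("http://www.example.com/");
            try (CloseableHttpResponse response = httpClient.execute(httpGet)) {
                System.out.println("Response Code : " + response.getStatusLine().getStatusCode());
                System.out.println(EntityUtils.toString(response.getEntity(), "UTF-8"));
            }
        }
    }
}

Both styles release the connection and the client correctly; try-with-resources simply makes the cleanup shorter on Java 7 and later.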