The history server REST API allows users to get the status of finished applications.
The history server information resource provides overall information about the history server.
Both of the following URIs give you the history server information:
http://history-server-http-address:port/ws/v1/history
http://history-server-http-address:port/ws/v1/history/info
Item | Data Type | Description |
---|---|---|
startedOn | long | The time the history server was started (in ms since epoch) |
hadoopVersion | string | Version of hadoop common |
hadoopBuildVersion | string | Hadoop common build string with build version, user, and checksum |
hadoopVersionBuiltOn | string | Timestamp when hadoop common was built |
JSON response
HTTP Request:
GET http://history-server-http-address:port/ws/v1/history/info
Response Header:
HTTP/1.1 200 OK
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(6.1.26)
Response Body:
{ "historyInfo" : { "startedOn":1353512830963, "hadoopVersionBuiltOn" : "Wed Jan 11 21:18:36 UTC 2012", "hadoopBuildVersion" : "0.23.1-SNAPSHOT from 1230253 by user1 source checksum bb6e554c6d50b0397d826081017437a7", "hadoopVersion" : "0.23.1-SNAPSHOT" } }
XML response
HTTP Request:
GET http://history-server-http-address:port/ws/v1/history/info
Accept: application/xml
Response Header:
HTTP/1.1 200 OK
Content-Type: application/xml
Content-Length: 330
Server: Jetty(6.1.26)
Response Body:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <historyInfo> <startedOn>1353512830963</startedOn> <hadoopVersion>0.23.1-SNAPSHOT</hadoopVersion> <hadoopBuildVersion>0.23.1-SNAPSHOT from 1230253 by user1 source checksum bb6e554c6d50b0397d826081017437a7</hadoopBuildVersion> <hadoopVersionBuiltOn>Wed Jan 11 21:18:36 UTC 2012</hadoopVersionBuiltOn> </historyInfo>
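As a sketch of how a client might consume this endpoint, the snippet below parses the JSON body shown above with Python's standard library and converts the millisecond `startedOn` value to a UTC datetime. The response text is embedded here for illustration rather than fetched over HTTP.

```python
import json
from datetime import datetime, timezone

# The JSON body returned by GET /ws/v1/history/info (embedded for illustration).
body = '''{"historyInfo": {"startedOn": 1353512830963,
  "hadoopVersionBuiltOn": "Wed Jan 11 21:18:36 UTC 2012",
  "hadoopBuildVersion": "0.23.1-SNAPSHOT from 1230253 by user1 source checksum bb6e554c6d50b0397d826081017437a7",
  "hadoopVersion": "0.23.1-SNAPSHOT"}}'''

info = json.loads(body)["historyInfo"]

# startedOn is milliseconds since the epoch, so divide by 1000 for seconds.
started = datetime.fromtimestamp(info["startedOn"] / 1000, tz=timezone.utc)

print(info["hadoopVersion"])  # 0.23.1-SNAPSHOT
print(started.year)           # 2012
```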
The following list of resources applies to MapReduce.
The jobs resource provides a list of the MapReduce jobs that have finished. It does not currently return the full set of fields for each job.
Multiple query parameters can be specified. The started and finished times have a begin and end parameter so that you can specify ranges. For example, you can request all jobs that started between 1:00am and 2:00pm on 12/19/2011 with startedTimeBegin=1324256400&startedTimeEnd=1324303200. If the begin parameter is not specified, it defaults to 0, and if the end parameter is not specified, it defaults to infinity.
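The begin/end values in that example can be derived with a short sketch. This is illustrative only: the times are interpreted as UTC (the text above does not state a timezone), and the host and port are the same placeholders used throughout this document.

```python
from datetime import datetime, timezone
from urllib.parse import urlencode

# 2011-12-19, 1:00 AM to 2:00 PM, as seconds since the epoch (UTC assumed).
begin = int(datetime(2011, 12, 19, 1, 0, tzinfo=timezone.utc).timestamp())
end = int(datetime(2011, 12, 19, 14, 0, tzinfo=timezone.utc).timestamp())

# Build the query string and append it to the jobs URI.
query = urlencode({"startedTimeBegin": begin, "startedTimeEnd": end})
url = "http://history-server-http-address:port/ws/v1/history/mapreduce/jobs?" + query
print(url)
```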
When you make a request for the list of jobs, the information will be returned as an array of job objects. See also the Job API for the syntax of the job object. However, this is a subset of a full job: only startTime, finishTime, id, name, queue, user, state, mapsTotal, mapsCompleted, reducesTotal, and reducesCompleted are returned.
Item | Data Type | Description |
---|---|---|
job | array of job objects (JSON) / zero or more job objects (XML) | The collection of job objects |
JSON response
HTTP Request:
GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs
Response Header:
HTTP/1.1 200 OK
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(6.1.26)
Response Body:
{ "jobs" : { "job" : [ { "submitTime" : 1326381344449, "state" : "SUCCEEDED", "user" : "user1", "reducesTotal" : 1, "mapsCompleted" : 1, "startTime" : 1326381344489, "id" : "job_1326381300833_1_1", "name" : "word count", "reducesCompleted" : 1, "mapsTotal" : 1, "queue" : "default", "finishTime" : 1326381356010 }, { "submitTime" : 1326381446500, "state" : "SUCCEEDED", "user" : "user1", "reducesTotal" : 1, "mapsCompleted" : 1, "startTime" : 1326381446529, "id" : "job_1326381300833_2_2", "name" : "Sleep job", "reducesCompleted" : 1, "mapsTotal" : 1, "queue" : "default", "finishTime" : 1326381582106 } ] } }
XML response
HTTP Request:
GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs
Accept: application/xml
Response Header:
HTTP/1.1 200 OK
Content-Type: application/xml
Content-Length: 1922
Server: Jetty(6.1.26)
Response Body:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <jobs> <job> <submitTime>1326381344449</submitTime> <startTime>1326381344489</startTime> <finishTime>1326381356010</finishTime> <id>job_1326381300833_1_1</id> <name>word count</name> <queue>default</queue> <user>user1</user> <state>SUCCEEDED</state> <mapsTotal>1</mapsTotal> <mapsCompleted>1</mapsCompleted> <reducesTotal>1</reducesTotal> <reducesCompleted>1</reducesCompleted> </job> <job> <submitTime>1326381446500</submitTime> <startTime>1326381446529</startTime> <finishTime>1326381582106</finishTime> <id>job_1326381300833_2_2</id> <name>Sleep job</name> <queue>default</queue> <user>user1</user> <state>SUCCEEDED</state> <mapsTotal>1</mapsTotal> <mapsCompleted>1</mapsCompleted> <reducesTotal>1</reducesTotal> <reducesCompleted>1</reducesCompleted> </job> </jobs>
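A client would typically iterate over the returned job array. The sketch below filters the sample response above for jobs whose state is SUCCEEDED; the JSON is embedded and trimmed to just the fields used.

```python
import json

# Trimmed version of the sample jobs response body above.
body = '''{"jobs": {"job": [
  {"id": "job_1326381300833_1_1", "name": "word count", "state": "SUCCEEDED",
   "mapsTotal": 1, "mapsCompleted": 1, "reducesTotal": 1, "reducesCompleted": 1},
  {"id": "job_1326381300833_2_2", "name": "Sleep job", "state": "SUCCEEDED",
   "mapsTotal": 1, "mapsCompleted": 1, "reducesTotal": 1, "reducesCompleted": 1}]}}'''

jobs = json.loads(body)["jobs"]["job"]

# Keep only the ids of jobs that finished successfully.
succeeded = [j["id"] for j in jobs if j["state"] == "SUCCEEDED"]
print(succeeded)  # ['job_1326381300833_1_1', 'job_1326381300833_2_2']
```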
A job resource contains information about a particular job identified by the jobid value.
Use the following URI to obtain a job object:
http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/{jobid}
Item | Data Type | Description |
---|---|---|
id | string | The job id |
name | string | The job name |
queue | string | The queue the job was submitted to |
user | string | The user name |
state | string | The job state - valid values are: NEW, INITED, RUNNING, SUCCEEDED, FAILED, KILL_WAIT, KILLED, ERROR |
diagnostics | string | A diagnostic message |
submitTime | long | The time the job was submitted (in ms since epoch) |
startTime | long | The time the job started (in ms since epoch) |
finishTime | long | The time the job finished (in ms since epoch) |
mapsTotal | int | The total number of maps |
mapsCompleted | int | The number of completed maps |
reducesTotal | int | The total number of reduces |
reducesCompleted | int | The number of completed reduces |
uberized | boolean | Indicates if the job was an uber job - ran completely in the application master |
avgMapTime | long | The average time of a map task (in ms) |
avgReduceTime | long | The average time of a reduce (in ms) |
avgShuffleTime | long | The average time of the shuffle (in ms) |
avgMergeTime | long | The average time of the merge (in ms) |
failedReduceAttempts | int | The number of failed reduce attempts |
killedReduceAttempts | int | The number of killed reduce attempts |
successfulReduceAttempts | int | The number of successful reduce attempts |
failedMapAttempts | int | The number of failed map attempts |
killedMapAttempts | int | The number of killed map attempts |
successfulMapAttempts | int | The number of successful map attempts |
acls | array of acls (JSON) / zero or more acls objects (XML) | A collection of acls objects |
Elements of the acls object:
Item | Data Type | Description |
---|---|---|
value | string | The acl value |
name | string | The acl name |
JSON response
HTTP Request:
GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2
Response Header:
HTTP/1.1 200 OK
Content-Type: application/json
Server: Jetty(6.1.26)
Content-Length: 720
Response Body:
{ "job" : { "submitTime": 1326381446500, "avgReduceTime" : 124961, "failedReduceAttempts" : 0, "state" : "SUCCEEDED", "successfulReduceAttempts" : 1, "acls" : [ { "value" : " ", "name" : "mapreduce.job.acl-modify-job" }, { "value" : " ", "name" : "mapreduce.job.acl-view-job" } ], "user" : "user1", "reducesTotal" : 1, "mapsCompleted" : 1, "startTime" : 1326381446529, "id" : "job_1326381300833_2_2", "avgMapTime" : 2638, "successfulMapAttempts" : 1, "name" : "Sleep job", "avgShuffleTime" : 2540, "reducesCompleted" : 1, "diagnostics" : "", "failedMapAttempts" : 0, "avgMergeTime" : 2589, "killedReduceAttempts" : 0, "mapsTotal" : 1, "queue" : "default", "uberized" : false, "killedMapAttempts" : 0, "finishTime" : 1326381582106 } }
XML response
HTTP Request:
GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2
Accept: application/xml
Response Header:
HTTP/1.1 200 OK
Content-Type: application/xml
Content-Length: 983
Server: Jetty(6.1.26)
Response Body:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <job> <submitTime>1326381446500</submitTime> <startTime>1326381446529</startTime> <finishTime>1326381582106</finishTime> <id>job_1326381300833_2_2</id> <name>Sleep job</name> <queue>default</queue> <user>user1</user> <state>SUCCEEDED</state> <mapsTotal>1</mapsTotal> <mapsCompleted>1</mapsCompleted> <reducesTotal>1</reducesTotal> <reducesCompleted>1</reducesCompleted> <uberized>false</uberized> <diagnostics/> <avgMapTime>2638</avgMapTime> <avgReduceTime>124961</avgReduceTime> <avgShuffleTime>2540</avgShuffleTime> <avgMergeTime>2589</avgMergeTime> <failedReduceAttempts>0</failedReduceAttempts> <killedReduceAttempts>0</killedReduceAttempts> <successfulReduceAttempts>1</successfulReduceAttempts> <failedMapAttempts>0</failedMapAttempts> <killedMapAttempts>0</killedMapAttempts> <successfulMapAttempts>1</successfulMapAttempts> <acls> <name>mapreduce.job.acl-modify-job</name> <value> </value> </acls> <acls> <name>mapreduce.job.acl-view-job</name> <value> </value> </acls> </job>
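The attempt counters in the job object make it easy to compute a per-task-type success rate; a sketch over the sample values above (the `success_rate` helper is illustrative, not part of the API):

```python
# Attempt counts taken from the sample job object above.
job = {
    "successfulMapAttempts": 1, "failedMapAttempts": 0, "killedMapAttempts": 0,
    "successfulReduceAttempts": 1, "failedReduceAttempts": 0, "killedReduceAttempts": 0,
}

def success_rate(kind: str) -> float:
    # Total attempts = successful + failed + killed for that task type.
    total = (job[f"successful{kind}Attempts"]
             + job[f"failed{kind}Attempts"]
             + job[f"killed{kind}Attempts"])
    return job[f"successful{kind}Attempts"] / total if total else 0.0

print(success_rate("Map"), success_rate("Reduce"))  # 1.0 1.0
```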
With the job attempts API, you can obtain a collection of resources that represent a job attempt. When you run a GET operation on this resource, you obtain a collection of job attempt objects.
http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/{jobid}/jobattempts
When you make a request for the list of job attempts, the information will be returned as an array of job attempt objects.
Elements of the jobAttempts object:
Item | Data Type | Description |
---|---|---|
jobAttempt | array of job attempt objects (JSON) / zero or more job attempt objects (XML) | The collection of job attempt objects |
Item | Data Type | Description |
---|---|---|
id | int | The job attempt id |
nodeId | string | The node id of the node the attempt ran on |
nodeHttpAddress | string | The node http address of the node the attempt ran on |
logsLink | string | The http link to the job attempt logs |
containerId | string | The id of the container for the job attempt |
startTime | long | The start time of the attempt (in ms since epoch) |
JSON response
HTTP Request:
GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/jobattempts
Response Header:
HTTP/1.1 200 OK
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(6.1.26)
Response Body:
{ "jobAttempts" : { "jobAttempt" : [ { "nodeId" : "host.domain.com:8041", "nodeHttpAddress" : "host.domain.com:8042", "startTime" : 1326381444693, "id" : 1, "logsLink" : "http://host.domain.com:19888/jobhistory/logs/host.domain.com:8041/container_1326381300833_0002_01_000001/job_1326381300833_2_2/user1", "containerId" : "container_1326381300833_0002_01_000001" } ] } }
XML response
HTTP Request:
GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/jobattempts
Accept: application/xml
Response Header:
HTTP/1.1 200 OK
Content-Type: application/xml
Content-Length: 575
Server: Jetty(6.1.26)
Response Body:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <jobAttempts> <jobAttempt> <nodeHttpAddress>host.domain.com:8042</nodeHttpAddress> <nodeId>host.domain.com:8041</nodeId> <id>1</id> <startTime>1326381444693</startTime> <containerId>container_1326381300833_0002_01_000001</containerId> <logsLink>http://host.domain.com:19888/jobhistory/logs/host.domain.com:8041/container_1326381300833_0002_01_000001/job_1326381300833_2_2/user1</logsLink> </jobAttempt> </jobAttempts>
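As the XML examples show, requesting XML rather than JSON only differs in the Accept header. The sketch below constructs (but does not send) such a request with Python's urllib; the host and port are placeholders.

```python
from urllib.request import Request

# Placeholder host/port; substitute your real history server address.
url = ("http://history-server-http-address:19888"
       "/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/jobattempts")

# Asking for XML instead of the default JSON is just an Accept header.
req = Request(url, headers={"Accept": "application/xml"})
print(req.get_header("Accept"))  # application/xml
# urllib.request.urlopen(req) would perform the GET (not done here).
```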
With the job counters API, you can obtain a collection of resources that represent all the counters for that job.
http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/{jobid}/counters
Item | Data Type | Description |
---|---|---|
id | string | The job id |
counterGroup | array of counterGroup objects (JSON) / zero or more counterGroup objects (XML) | A collection of counter group objects |
Elements of the counterGroup object:
Item | Data Type | Description |
---|---|---|
counterGroupName | string | The name of the counter group |
counter | array of counter objects (JSON) / zero or more counter objects (XML) | A collection of counter objects |
Elements of the counter object:
Item | Data Type | Description |
---|---|---|
name | string | The name of the counter |
reduceCounterValue | long | The counter value of reduce tasks |
mapCounterValue | long | The counter value of map tasks |
totalCounterValue | long | The counter value of all tasks |
JSON response
HTTP Request:
GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/counters
Response Header:
HTTP/1.1 200 OK
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(6.1.26)
Response Body:
{ "jobCounters" : { "id" : "job_1326381300833_2_2", "counterGroup" : [ { "counterGroupName" : "Shuffle Errors", "counter" : [ { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 0, "name" : "BAD_ID" }, { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 0, "name" : "CONNECTION" }, { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 0, "name" : "IO_ERROR" }, { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 0, "name" : "WRONG_LENGTH" }, { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 0, "name" : "WRONG_MAP" }, { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 0, "name" : "WRONG_REDUCE" } ] }, { "counterGroupName" : "org.apache.hadoop.mapreduce.FileSystemCounter", "counter" : [ { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 2483, "name" : "FILE_BYTES_READ" }, { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 108525, "name" : "FILE_BYTES_WRITTEN" }, { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 0, "name" : "FILE_READ_OPS" }, { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 0, "name" : "FILE_LARGE_READ_OPS" }, { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 0, "name" : "FILE_WRITE_OPS" }, { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 48, "name" : "HDFS_BYTES_READ" }, { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 0, "name" : "HDFS_BYTES_WRITTEN" }, { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 1, "name" : "HDFS_READ_OPS" }, { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 0, "name" : "HDFS_LARGE_READ_OPS" }, { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 0, "name" : "HDFS_WRITE_OPS" } ] }, { "counterGroupName" : "org.apache.hadoop.mapreduce.TaskCounter", "counter" : [ { 
"reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 1, "name" : "MAP_INPUT_RECORDS" }, { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 1200, "name" : "MAP_OUTPUT_RECORDS" }, { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 4800, "name" : "MAP_OUTPUT_BYTES" }, { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 2235, "name" : "MAP_OUTPUT_MATERIALIZED_BYTES" }, { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 48, "name" : "SPLIT_RAW_BYTES" }, { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 0, "name" : "COMBINE_INPUT_RECORDS" }, { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 0, "name" : "COMBINE_OUTPUT_RECORDS" }, { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 1200, "name" : "REDUCE_INPUT_GROUPS" }, { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 2235, "name" : "REDUCE_SHUFFLE_BYTES" }, { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 1200, "name" : "REDUCE_INPUT_RECORDS" }, { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 0, "name" : "REDUCE_OUTPUT_RECORDS" }, { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 2400, "name" : "SPILLED_RECORDS" }, { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 1, "name" : "SHUFFLED_MAPS" }, { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 0, "name" : "FAILED_SHUFFLE" }, { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 1, "name" : "MERGED_MAP_OUTPUTS" }, { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 113, "name" : "GC_TIME_MILLIS" }, { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 1830, "name" : "CPU_MILLISECONDS" }, { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 478068736, "name" : "PHYSICAL_MEMORY_BYTES" }, 
{ "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 2159284224, "name" : "VIRTUAL_MEMORY_BYTES" }, { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 378863616, "name" : "COMMITTED_HEAP_BYTES" } ] }, { "counterGroupName" : "org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter", "counter" : [ { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 0, "name" : "BYTES_READ" } ] }, { "counterGroupName" : "org.apache.hadoop.mapreduce.lib.output.FileOutputFormatCounter", "counter" : [ { "reduceCounterValue" : 0, "mapCounterValue" : 0, "totalCounterValue" : 0, "name" : "BYTES_WRITTEN" } ] } ] } }
XML response
HTTP Request:
GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/counters
Accept: application/xml
Response Header:
HTTP/1.1 200 OK
Content-Type: application/xml
Content-Length: 7030
Server: Jetty(6.1.26)
Response Body:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <jobCounters> <id>job_1326381300833_2_2</id> <counterGroup> <counterGroupName>Shuffle Errors</counterGroupName> <counter> <name>BAD_ID</name> <totalCounterValue>0</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> <counter> <name>CONNECTION</name> <totalCounterValue>0</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> <counter> <name>IO_ERROR</name> <totalCounterValue>0</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> <counter> <name>WRONG_LENGTH</name> <totalCounterValue>0</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> <counter> <name>WRONG_MAP</name> <totalCounterValue>0</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> <counter> <name>WRONG_REDUCE</name> <totalCounterValue>0</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> </counterGroup> <counterGroup> <counterGroupName>org.apache.hadoop.mapreduce.FileSystemCounter</counterGroupName> <counter> <name>FILE_BYTES_READ</name> <totalCounterValue>2483</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> <counter> <name>FILE_BYTES_WRITTEN</name> <totalCounterValue>108525</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> <counter> <name>FILE_READ_OPS</name> <totalCounterValue>0</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> <counter> <name>FILE_LARGE_READ_OPS</name> <totalCounterValue>0</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> <counter> 
<name>FILE_WRITE_OPS</name> <totalCounterValue>0</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> <counter> <name>HDFS_BYTES_READ</name> <totalCounterValue>48</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> <counter> <name>HDFS_BYTES_WRITTEN</name> <totalCounterValue>0</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> <counter> <name>HDFS_READ_OPS</name> <totalCounterValue>1</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> <counter> <name>HDFS_LARGE_READ_OPS</name> <totalCounterValue>0</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> <counter> <name>HDFS_WRITE_OPS</name> <totalCounterValue>0</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> </counterGroup> <counterGroup> <counterGroupName>org.apache.hadoop.mapreduce.TaskCounter</counterGroupName> <counter> <name>MAP_INPUT_RECORDS</name> <totalCounterValue>1</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> <counter> <name>MAP_OUTPUT_RECORDS</name> <totalCounterValue>1200</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> <counter> <name>MAP_OUTPUT_BYTES</name> <totalCounterValue>4800</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> <counter> <name>MAP_OUTPUT_MATERIALIZED_BYTES</name> <totalCounterValue>2235</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> <counter> <name>SPLIT_RAW_BYTES</name> <totalCounterValue>48</totalCounterValue> <mapCounterValue>0</mapCounterValue> 
<reduceCounterValue>0</reduceCounterValue> </counter> <counter> <name>COMBINE_INPUT_RECORDS</name> <totalCounterValue>0</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> <counter> <name>COMBINE_OUTPUT_RECORDS</name> <totalCounterValue>0</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> <counter> <name>REDUCE_INPUT_GROUPS</name> <totalCounterValue>1200</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> <counter> <name>REDUCE_SHUFFLE_BYTES</name> <totalCounterValue>2235</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> <counter> <name>REDUCE_INPUT_RECORDS</name> <totalCounterValue>1200</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> <counter> <name>REDUCE_OUTPUT_RECORDS</name> <totalCounterValue>0</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> <counter> <name>SPILLED_RECORDS</name> <totalCounterValue>2400</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> <counter> <name>SHUFFLED_MAPS</name> <totalCounterValue>1</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> <counter> <name>FAILED_SHUFFLE</name> <totalCounterValue>0</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> <counter> <name>MERGED_MAP_OUTPUTS</name> <totalCounterValue>1</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> <counter> <name>GC_TIME_MILLIS</name> <totalCounterValue>113</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> <counter> 
<name>CPU_MILLISECONDS</name> <totalCounterValue>1830</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> <counter> <name>PHYSICAL_MEMORY_BYTES</name> <totalCounterValue>478068736</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> <counter> <name>VIRTUAL_MEMORY_BYTES</name> <totalCounterValue>2159284224</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> <counter> <name>COMMITTED_HEAP_BYTES</name> <totalCounterValue>378863616</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> </counterGroup> <counterGroup> <counterGroupName>org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter</counterGroupName> <counter> <name>BYTES_READ</name> <totalCounterValue>0</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> </counterGroup> <counterGroup> <counterGroupName>org.apache.hadoop.mapreduce.lib.output.FileOutputFormatCounter</counterGroupName> <counter> <name>BYTES_WRITTEN</name> <totalCounterValue>0</totalCounterValue> <mapCounterValue>0</mapCounterValue> <reduceCounterValue>0</reduceCounterValue> </counter> </counterGroup> </jobCounters>
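The nested counterGroup/counter structure flattens naturally into a lookup table. The sketch below indexes a trimmed sample of the JSON above by (group name, counter name); only a few counters are kept for brevity.

```python
import json

# Trimmed sample of the jobCounters response above.
body = '''{"jobCounters": {"id": "job_1326381300833_2_2", "counterGroup": [
  {"counterGroupName": "org.apache.hadoop.mapreduce.FileSystemCounter",
   "counter": [{"name": "FILE_BYTES_READ", "totalCounterValue": 2483,
                "mapCounterValue": 0, "reduceCounterValue": 0},
               {"name": "HDFS_BYTES_READ", "totalCounterValue": 48,
                "mapCounterValue": 0, "reduceCounterValue": 0}]},
  {"counterGroupName": "org.apache.hadoop.mapreduce.TaskCounter",
   "counter": [{"name": "GC_TIME_MILLIS", "totalCounterValue": 113,
                "mapCounterValue": 0, "reduceCounterValue": 0}]}]}}'''

groups = json.loads(body)["jobCounters"]["counterGroup"]

# Flatten to {(group name, counter name): total value}.
totals = {(g["counterGroupName"], c["name"]): c["totalCounterValue"]
          for g in groups for c in g["counter"]}

print(totals[("org.apache.hadoop.mapreduce.FileSystemCounter", "FILE_BYTES_READ")])  # 2483
```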
A job configuration resource contains information about the job configuration for this job.
Use the following URI to obtain the job configuration information, from a job identified by the jobid value:
http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/{jobid}/conf
Item | Data Type | Description |
---|---|---|
path | string | The path to the job configuration file |
property | array of configuration properties (JSON) / zero or more configuration properties (XML) | Collection of configuration property objects |
Elements of the property object:
Item | Data Type | Description |
---|---|---|
name | string | The name of the configuration property |
value | string | The value of the configuration property |
source | string | The location this configuration object came from. If there is more than one of these, it shows the history, with the latest source at the end of the list. |
JSON response
HTTP Request:
GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/conf
Response Header:
HTTP/1.1 200 OK
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(6.1.26)
Response Body:
This is a small snippet of the output, as the full output is very large. The actual output contains every property in the job configuration file.
{ "conf" : { "path" : "hdfs://host.domain.com:9000/user/user1/.staging/job_1326381300833_0002/job.xml", "property" : [ { "value" : "/home/hadoop/hdfs/data", "name" : "dfs.datanode.data.dir", "source" : ["hdfs-site.xml", "job.xml"] }, { "value" : "org.apache.hadoop.yarn.server.webproxy.amfilter.AmFilterInitializer", "name" : "hadoop.http.filter.initializers", "source" : ["programmatically", "job.xml"] }, { "value" : "/home/hadoop/tmp", "name" : "mapreduce.cluster.temp.dir", "source" : ["mapred-site.xml"] }, ... ] } }
XML response
HTTP Request:
GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/conf
Accept: application/xml
Response Header:
HTTP/1.1 200 OK
Content-Type: application/xml
Content-Length: 552
Server: Jetty(6.1.26)
Response Body:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <conf> <path>hdfs://host.domain.com:9000/user/user1/.staging/job_1326381300833_0002/job.xml</path> <property> <name>dfs.datanode.data.dir</name> <value>/home/hadoop/hdfs/data</value> <source>hdfs-site.xml</source> <source>job.xml</source> </property> <property> <name>hadoop.http.filter.initializers</name> <value>org.apache.hadoop.yarn.server.webproxy.amfilter.AmFilterInitializer</value> <source>programmatically</source> <source>job.xml</source> </property> <property> <name>mapreduce.cluster.temp.dir</name> <value>/home/hadoop/tmp</value> <source>mapred-site.xml</source> </property> ... </conf>
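For the XML form, the property list parses cleanly with the standard library. The sketch below reads a trimmed two-property sample and keeps each property's value together with its last `<source>` (per the table above, the last source listed is the most recent).

```python
import xml.etree.ElementTree as ET

# Trimmed sample of the conf response above (remaining properties elided).
body = '''<conf>
  <path>hdfs://host.domain.com:9000/user/user1/.staging/job_1326381300833_0002/job.xml</path>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/hdfs/data</value>
    <source>hdfs-site.xml</source>
    <source>job.xml</source>
  </property>
  <property>
    <name>mapreduce.cluster.temp.dir</name>
    <value>/home/hadoop/tmp</value>
    <source>mapred-site.xml</source>
  </property>
</conf>'''

root = ET.fromstring(body)

# Map each property name to (value, most recent source).
conf = {p.findtext("name"): (p.findtext("value"), p.findall("source")[-1].text)
        for p in root.iter("property")}

print(conf["dfs.datanode.data.dir"])  # ('/home/hadoop/hdfs/data', 'job.xml')
```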
With the tasks API, you can obtain a collection of resources that represent a task within a job. When you run a GET operation on this resource, you obtain a collection of task objects.
When you make a request for the list of tasks, the information will be returned as an array of task objects. See also the Task API for the syntax of the task object.
Item | Data Type | Description |
---|---|---|
task | array of task objects (JSON) / zero or more task objects (XML) | The collection of task objects |
JSON response
HTTP Request:
GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/tasks
Response Header:
HTTP/1.1 200 OK
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(6.1.26)
Response Body:
{ "tasks" : { "task" : [ { "progress" : 100, "elapsedTime" : 6777, "state" : "SUCCEEDED", "startTime" : 1326381446541, "id" : "task_1326381300833_2_2_m_0", "type" : "MAP", "successfulAttempt" : "attempt_1326381300833_2_2_m_0_0", "finishTime" : 1326381453318 }, { "progress" : 100, "elapsedTime" : 135559, "state" : "SUCCEEDED", "startTime" : 1326381446544, "id" : "task_1326381300833_2_2_r_0", "type" : "REDUCE", "successfulAttempt" : "attempt_1326381300833_2_2_r_0_0", "finishTime" : 1326381582103 } ] } }
XML response
HTTP Request:
GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/tasks
Accept: application/xml
Response Header:
HTTP/1.1 200 OK
Content-Type: application/xml
Content-Length: 653
Server: Jetty(6.1.26)
Response Body:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <tasks> <task> <startTime>1326381446541</startTime> <finishTime>1326381453318</finishTime> <elapsedTime>6777</elapsedTime> <progress>100.0</progress> <id>task_1326381300833_2_2_m_0</id> <state>SUCCEEDED</state> <type>MAP</type> <successfulAttempt>attempt_1326381300833_2_2_m_0_0</successfulAttempt> </task> <task> <startTime>1326381446544</startTime> <finishTime>1326381582103</finishTime> <elapsedTime>135559</elapsedTime> <progress>100.0</progress> <id>task_1326381300833_2_2_r_0</id> <state>SUCCEEDED</state> <type>REDUCE</type> <successfulAttempt>attempt_1326381300833_2_2_r_0_0</successfulAttempt> </task> </tasks>
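In the samples above, elapsedTime is simply finishTime minus startTime. A quick sketch verifying that invariant against the two sample tasks (the triples below are copied from the response above):

```python
# startTime/finishTime/elapsedTime triples from the sample tasks above (ms).
tasks = [
    {"id": "task_1326381300833_2_2_m_0", "startTime": 1326381446541,
     "finishTime": 1326381453318, "elapsedTime": 6777},
    {"id": "task_1326381300833_2_2_r_0", "startTime": 1326381446544,
     "finishTime": 1326381582103, "elapsedTime": 135559},
]

for t in tasks:
    # Each task's elapsed time equals the difference of its timestamps.
    assert t["finishTime"] - t["startTime"] == t["elapsedTime"]
print("elapsed-time invariant holds for", len(tasks), "tasks")
```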
A task resource contains information about a particular task within a job.
Use the following URI to obtain a task object, from a task identified by the taskid value:
http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/{jobid}/tasks/{taskid}
Item | Data Type | Description |
---|---|---|
id | string | The task id |
state | string | The state of the task - valid values are: NEW, SCHEDULED, RUNNING, SUCCEEDED, FAILED, KILL_WAIT, KILLED |
type | string | The task type - MAP or REDUCE |
successfulAttempt | string | The id of the last successful attempt |
progress | float | The progress of the task as a percent |
startTime | long | The time the task started (in ms since epoch), or -1 if it was never started |
finishTime | long | The time the task finished (in ms since epoch) |
elapsedTime | long | The elapsed time since the application started (in ms) |
JSON response
HTTP Request:
GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/tasks/task_1326381300833_2_2_m_0
Response Header:
HTTP/1.1 200 OK
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(6.1.26)
Response Body:
{ "task" : { "progress" : 100, "elapsedTime" : 6777, "state" : "SUCCEEDED", "startTime" : 1326381446541, "id" : "task_1326381300833_2_2_m_0", "type" : "MAP", "successfulAttempt" : "attempt_1326381300833_2_2_m_0_0", "finishTime" : 1326381453318 } }
XML response
HTTP Request:
GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/tasks/task_1326381300833_2_2_m_0
Accept: application/xml
Response Header:
HTTP/1.1 200 OK
Content-Type: application/xml
Content-Length: 299
Server: Jetty(6.1.26)
Response Body:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <task> <startTime>1326381446541</startTime> <finishTime>1326381453318</finishTime> <elapsedTime>6777</elapsedTime> <progress>100.0</progress> <id>task_1326381300833_2_2_m_0</id> <state>SUCCEEDED</state> <type>MAP</type> <successfulAttempt>attempt_1326381300833_2_2_m_0_0</successfulAttempt> </task>
With the task counters API, you can obtain a collection of resources that represent all the counters for that task.
http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/{jobid}/tasks/{taskid}/counters
Item | Data Type | Description |
---|---|---|
id | string | The task id |
taskCounterGroup | array of counterGroup objects (JSON) / zero or more counterGroup objects (XML) | A collection of counter group objects |
Elements of the counterGroup object:
Item | Data Type | Description |
---|---|---|
counterGroupName | string | The name of the counter group |
counter | array of counter objects (JSON) / zero or more counter objects (XML) | A collection of counter objects |
Elements of the counter object:
Item | Data Type | Description |
---|---|---|
name | string | The name of the counter |
value | long | The value of the counter |
JSON response
HTTP Request:
GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/tasks/task_1326381300833_2_2_m_0/counters
Response Header:
HTTP/1.1 200 OK
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(6.1.26)
Response Body:
{ "jobTaskCounters" : { "id" : "task_1326381300833_2_2_m_0", "taskCounterGroup" : [ { "counterGroupName" : "org.apache.hadoop.mapreduce.FileSystemCounter", "counter" : [ { "value" : 2363, "name" : "FILE_BYTES_READ" }, { "value" : 54372, "name" : "FILE_BYTES_WRITTEN" }, { "value" : 0, "name" : "FILE_READ_OPS" }, { "value" : 0, "name" : "FILE_LARGE_READ_OPS" }, { "value" : 0, "name" : "FILE_WRITE_OPS" }, { "value" : 0, "name" : "HDFS_BYTES_READ" }, { "value" : 0, "name" : "HDFS_BYTES_WRITTEN" }, { "value" : 0, "name" : "HDFS_READ_OPS" }, { "value" : 0, "name" : "HDFS_LARGE_READ_OPS" }, { "value" : 0, "name" : "HDFS_WRITE_OPS" } ] }, { "counterGroupName" : "org.apache.hadoop.mapreduce.TaskCounter", "counter" : [ { "value" : 0, "name" : "COMBINE_INPUT_RECORDS" }, { "value" : 0, "name" : "COMBINE_OUTPUT_RECORDS" }, { "value" : 460, "name" : "REDUCE_INPUT_GROUPS" }, { "value" : 2235, "name" : "REDUCE_SHUFFLE_BYTES" }, { "value" : 460, "name" : "REDUCE_INPUT_RECORDS" }, { "value" : 0, "name" : "REDUCE_OUTPUT_RECORDS" }, { "value" : 0, "name" : "SPILLED_RECORDS" }, { "value" : 1, "name" : "SHUFFLED_MAPS" }, { "value" : 0, "name" : "FAILED_SHUFFLE" }, { "value" : 1, "name" : "MERGED_MAP_OUTPUTS" }, { "value" : 26, "name" : "GC_TIME_MILLIS" }, { "value" : 860, "name" : "CPU_MILLISECONDS" }, { "value" : 107839488, "name" : "PHYSICAL_MEMORY_BYTES" }, { "value" : 1123147776, "name" : "VIRTUAL_MEMORY_BYTES" }, { "value" : 57475072, "name" : "COMMITTED_HEAP_BYTES" } ] }, { "counterGroupName" : "Shuffle Errors", "counter" : [ { "value" : 0, "name" : "BAD_ID" }, { "value" : 0, "name" : "CONNECTION" }, { "value" : 0, "name" : "IO_ERROR" }, { "value" : 0, "name" : "WRONG_LENGTH" }, { "value" : 0, "name" : "WRONG_MAP" }, { "value" : 0, "name" : "WRONG_REDUCE" } ] }, { "counterGroupName" : "org.apache.hadoop.mapreduce.lib.output.FileOutputFormatCounter", "counter" : [ { "value" : 0, "name" : "BYTES_WRITTEN" } ] } ] } }
XML response
HTTP Request:
GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/tasks/task_1326381300833_2_2_m_0/counters
Accept: application/xml
Response Header:
HTTP/1.1 200 OK
Content-Type: application/xml
Content-Length: 2660
Server: Jetty(6.1.26)
Response Body:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <jobTaskCounters> <id>task_1326381300833_2_2_m_0</id> <taskCounterGroup> <counterGroupName>org.apache.hadoop.mapreduce.FileSystemCounter</counterGroupName> <counter> <name>FILE_BYTES_READ</name> <value>2363</value> </counter> <counter> <name>FILE_BYTES_WRITTEN</name> <value>54372</value> </counter> <counter> <name>FILE_READ_OPS</name> <value>0</value> </counter> <counter> <name>FILE_LARGE_READ_OPS</name> <value>0</value> </counter> <counter> <name>FILE_WRITE_OPS</name> <value>0</value> </counter> <counter> <name>HDFS_BYTES_READ</name> <value>0</value> </counter> <counter> <name>HDFS_BYTES_WRITTEN</name> <value>0</value> </counter> <counter> <name>HDFS_READ_OPS</name> <value>0</value> </counter> <counter> <name>HDFS_LARGE_READ_OPS</name> <value>0</value> </counter> <counter> <name>HDFS_WRITE_OPS</name> <value>0</value> </counter> </taskCounterGroup> <taskCounterGroup> <counterGroupName>org.apache.hadoop.mapreduce.TaskCounter</counterGroupName> <counter> <name>COMBINE_INPUT_RECORDS</name> <value>0</value> </counter> <counter> <name>COMBINE_OUTPUT_RECORDS</name> <value>0</value> </counter> <counter> <name>REDUCE_INPUT_GROUPS</name> <value>460</value> </counter> <counter> <name>REDUCE_SHUFFLE_BYTES</name> <value>2235</value> </counter> <counter> <name>REDUCE_INPUT_RECORDS</name> <value>460</value> </counter> <counter> <name>REDUCE_OUTPUT_RECORDS</name> <value>0</value> </counter> <counter> <name>SPILLED_RECORDS</name> <value>0</value> </counter> <counter> <name>SHUFFLED_MAPS</name> <value>1</value> </counter> <counter> <name>FAILED_SHUFFLE</name> <value>0</value> </counter> <counter> <name>MERGED_MAP_OUTPUTS</name> <value>1</value> </counter> <counter> <name>GC_TIME_MILLIS</name> <value>26</value> </counter> <counter> <name>CPU_MILLISECONDS</name> <value>860</value> </counter> <counter> <name>PHYSICAL_MEMORY_BYTES</name> <value>107839488</value> </counter> <counter> <name>VIRTUAL_MEMORY_BYTES</name> 
<value>1123147776</value> </counter> <counter> <name>COMMITTED_HEAP_BYTES</name> <value>57475072</value> </counter> </taskCounterGroup> <taskCounterGroup> <counterGroupName>Shuffle Errors</counterGroupName> <counter> <name>BAD_ID</name> <value>0</value> </counter> <counter> <name>CONNECTION</name> <value>0</value> </counter> <counter> <name>IO_ERROR</name> <value>0</value> </counter> <counter> <name>WRONG_LENGTH</name> <value>0</value> </counter> <counter> <name>WRONG_MAP</name> <value>0</value> </counter> <counter> <name>WRONG_REDUCE</name> <value>0</value> </counter> </taskCounterGroup> <taskCounterGroup> <counterGroupName>org.apache.hadoop.mapreduce.lib.output.FileOutputFormatCounter</counterGroupName> <counter> <name>BYTES_WRITTEN</name> <value>0</value> </counter> </taskCounterGroup> </jobTaskCounters>
With the task attempts API, you can obtain a collection of resources that represent a task attempt within a job. When you run a GET operation on this resource, you obtain a collection of Task Attempt Objects.
Use the following URI to obtain the list of task attempt objects:

http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/{jobid}/tasks/{taskid}/attempts
When you make a request for the list of task attempts, the information will be returned as an array of task attempt objects. See also the Task Attempt API for the syntax of the task attempt object.

Item | Data Type | Description |
---|---|---|
taskAttempt | array of task attempt objects (JSON) / zero or more task attempt objects (XML) | The collection of task attempt objects |
JSON response

HTTP Request:
GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/tasks/task_1326381300833_2_2_m_0/attempts
Response Header:
HTTP/1.1 200 OK
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(6.1.26)
Response Body:
{
  "taskAttempts" : {
    "taskAttempt" : [
      {
        "assignedContainerId" : "container_1326381300833_0002_01_000002",
        "progress" : 100,
        "elapsedTime" : 2638,
        "state" : "SUCCEEDED",
        "diagnostics" : "",
        "rack" : "/98.139.92.0",
        "nodeHttpAddress" : "host.domain.com:8042",
        "startTime" : 1326381450680,
        "id" : "attempt_1326381300833_2_2_m_0_0",
        "type" : "MAP",
        "finishTime" : 1326381453318
      }
    ]
  }
}
XML response

HTTP Request:
GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/tasks/task_1326381300833_2_2_m_0/attempts
Accept: application/xml
Response Header:
HTTP/1.1 200 OK
Content-Type: application/xml
Content-Length: 537
Server: Jetty(6.1.26)
Response Body:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<taskAttempts>
  <taskAttempt>
    <startTime>1326381450680</startTime>
    <finishTime>1326381453318</finishTime>
    <elapsedTime>2638</elapsedTime>
    <progress>100.0</progress>
    <id>attempt_1326381300833_2_2_m_0_0</id>
    <rack>/98.139.92.0</rack>
    <state>SUCCEEDED</state>
    <nodeHttpAddress>host.domain.com:8042</nodeHttpAddress>
    <diagnostics/>
    <type>MAP</type>
    <assignedContainerId>container_1326381300833_0002_01_000002</assignedContainerId>
  </taskAttempt>
</taskAttempts>
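As a sketch of how a client might consume this list response, the snippet below parses the sample JSON body shown above with Python's standard library and prints one line per attempt. The endpoint URL in the comment uses the same placeholder host and port as the rest of this document; a live client would fetch it over HTTP, but here the sample body is embedded so the example runs offline.

```python
import json

# Endpoint (placeholder host:port -- substitute your history server's address):
#   GET http://history-server-http-address:port/ws/v1/history/mapreduce
#          /jobs/{jobid}/tasks/{taskid}/attempts
# A live client would fetch this with, e.g., urllib.request.urlopen(url);
# here we parse the sample response body from this document instead.
body = """
{ "taskAttempts" : { "taskAttempt" : [ {
  "assignedContainerId" : "container_1326381300833_0002_01_000002",
  "progress" : 100, "elapsedTime" : 2638, "state" : "SUCCEEDED",
  "diagnostics" : "", "rack" : "/98.139.92.0",
  "nodeHttpAddress" : "host.domain.com:8042",
  "startTime" : 1326381450680, "id" : "attempt_1326381300833_2_2_m_0_0",
  "type" : "MAP", "finishTime" : 1326381453318 } ] } }
"""

# The list lives under taskAttempts -> taskAttempt.
attempts = json.loads(body)["taskAttempts"]["taskAttempt"]
for a in attempts:
    print(f'{a["id"]}: {a["state"]} in {a["elapsedTime"]} ms on {a["nodeHttpAddress"]}')
```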
A Task Attempt resource contains information about a particular task attempt within a job.

Use the following URI to obtain a Task Attempt Object, from a task identified by the attemptid value.
http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/{jobid}/tasks/{taskid}/attempts/{attemptid}
Item | Data Type | Description |
---|---|---|
id | string | The task attempt id |
rack | string | The rack |
state | string | The state of the task attempt - valid values are: NEW, UNASSIGNED, ASSIGNED, RUNNING, COMMIT_PENDING, SUCCESS_CONTAINER_CLEANUP, SUCCEEDED, FAIL_CONTAINER_CLEANUP, FAIL_TASK_CLEANUP, FAILED, KILL_CONTAINER_CLEANUP, KILL_TASK_CLEANUP, KILLED |
type | string | The type of task |
assignedContainerId | string | The container id this attempt is assigned to |
nodeHttpAddress | string | The HTTP address of the node this task attempt ran on |
diagnostics | string | The diagnostics message |
progress | float | The progress of the task attempt as a percentage |
startTime | long | The time at which the task attempt started (in ms since epoch) |
finishTime | long | The time at which the task attempt finished (in ms since epoch) |
elapsedTime | long | The elapsed time since the task attempt started (in ms) |
For reduce task attempts, you also have the following fields:

Item | Data Type | Description |
---|---|---|
shuffleFinishTime | long | The time at which shuffle finished (in ms since epoch) |
mergeFinishTime | long | The time at which merge finished (in ms since epoch) |
elapsedShuffleTime | long | The time it took for the shuffle phase to complete (time in ms between the reduce task start and shuffle finish) |
elapsedMergeTime | long | The time it took for the merge phase to complete (time in ms between the shuffle finish and merge finish) |
elapsedReduceTime | long | The time it took for the reduce phase to complete (time in ms between merge finish and the end of the reduce task) |
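The three elapsed fields partition the reduce attempt's lifetime into consecutive phases, which can be checked with simple arithmetic. The timestamps below are hypothetical, chosen only to illustrate the relationships defined in the table above:

```python
# Hypothetical reduce-attempt timestamps (ms since epoch), for illustration only.
start_time     = 1326381500000  # task attempt startTime
shuffle_finish = 1326381504000  # shuffleFinishTime
merge_finish   = 1326381505500  # mergeFinishTime
finish_time    = 1326381509000  # finishTime

# Each elapsed field is the difference between consecutive phase boundaries.
elapsed_shuffle = shuffle_finish - start_time    # shuffle phase: 4000 ms
elapsed_merge   = merge_finish - shuffle_finish  # merge phase:   1500 ms
elapsed_reduce  = finish_time - merge_finish     # reduce phase:  3500 ms

# The three phases together account for the whole attempt.
assert elapsed_shuffle + elapsed_merge + elapsed_reduce == finish_time - start_time
print(elapsed_shuffle, elapsed_merge, elapsed_reduce)  # 4000 1500 3500
```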
JSON response

HTTP Request:
GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/tasks/task_1326381300833_2_2_m_0/attempts/attempt_1326381300833_2_2_m_0_0
Response Header:
HTTP/1.1 200 OK
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(6.1.26)
Response Body:
{
  "taskAttempt" : {
    "assignedContainerId" : "container_1326381300833_0002_01_000002",
    "progress" : 100,
    "elapsedTime" : 2638,
    "state" : "SUCCEEDED",
    "diagnostics" : "",
    "rack" : "/98.139.92.0",
    "nodeHttpAddress" : "host.domain.com:8042",
    "startTime" : 1326381450680,
    "id" : "attempt_1326381300833_2_2_m_0_0",
    "type" : "MAP",
    "finishTime" : 1326381453318
  }
}
XML response

HTTP Request:
GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/tasks/task_1326381300833_2_2_m_0/attempts/attempt_1326381300833_2_2_m_0_0
Accept: application/xml
Response Header:
HTTP/1.1 200 OK
Content-Type: application/xml
Content-Length: 691
Server: Jetty(6.1.26)
Response Body:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<taskAttempt>
  <startTime>1326381450680</startTime>
  <finishTime>1326381453318</finishTime>
  <elapsedTime>2638</elapsedTime>
  <progress>100.0</progress>
  <id>attempt_1326381300833_2_2_m_0_0</id>
  <rack>/98.139.92.0</rack>
  <state>SUCCEEDED</state>
  <nodeHttpAddress>host.domain.com:8042</nodeHttpAddress>
  <diagnostics/>
  <type>MAP</type>
  <assignedContainerId>container_1326381300833_0002_01_000002</assignedContainerId>
</taskAttempt>
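A client reading a single Task Attempt Object can cross-check the timing fields against each other. The sketch below parses the sample JSON body shown earlier and confirms that elapsedTime equals finishTime minus startTime:

```python
import json

# Sample single-attempt response body from this document.
body = """
{ "taskAttempt" : {
  "assignedContainerId" : "container_1326381300833_0002_01_000002",
  "progress" : 100, "elapsedTime" : 2638, "state" : "SUCCEEDED",
  "diagnostics" : "", "rack" : "/98.139.92.0",
  "nodeHttpAddress" : "host.domain.com:8042",
  "startTime" : 1326381450680, "id" : "attempt_1326381300833_2_2_m_0_0",
  "type" : "MAP", "finishTime" : 1326381453318 } }
"""

attempt = json.loads(body)["taskAttempt"]
# elapsedTime is defined as finishTime - startTime (all in ms since epoch).
elapsed = attempt["finishTime"] - attempt["startTime"]
assert elapsed == attempt["elapsedTime"]  # 1326381453318 - 1326381450680 == 2638
print(attempt["id"], "elapsed", elapsed, "ms")
```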
With the task attempt counters API, you can obtain a collection of resources that represent all the counters for that task attempt.
http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/{jobid}/tasks/{taskid}/attempts/{attemptid}/counters
Item | Data Type | Description |
---|---|---|
id | string | The task attempt id |
taskAttemptcounterGroup | array of task attempt counterGroup objects (JSON) / zero or more task attempt counterGroup objects (XML) | A collection of task attempt counter group objects |

Item | Data Type | Description |
---|---|---|
counterGroupName | string | The name of the counter group |
counter | array of counter objects (JSON) / zero or more counter objects (XML) | A collection of counter objects |

Item | Data Type | Description |
---|---|---|
name | string | The name of the counter |
value | long | The value of the counter |
JSON response

HTTP Request:
GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/tasks/task_1326381300833_2_2_m_0/attempts/attempt_1326381300833_2_2_m_0_0/counters
Response Header:
HTTP/1.1 200 OK
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(6.1.26)
Response Body:
{
  "jobTaskAttemptCounters" : {
    "taskAttemptCounterGroup" : [
      {
        "counterGroupName" : "org.apache.hadoop.mapreduce.FileSystemCounter",
        "counter" : [
          { "value" : 2363, "name" : "FILE_BYTES_READ" },
          { "value" : 54372, "name" : "FILE_BYTES_WRITTEN" },
          { "value" : 0, "name" : "FILE_READ_OPS" },
          { "value" : 0, "name" : "FILE_LARGE_READ_OPS" },
          { "value" : 0, "name" : "FILE_WRITE_OPS" },
          { "value" : 0, "name" : "HDFS_BYTES_READ" },
          { "value" : 0, "name" : "HDFS_BYTES_WRITTEN" },
          { "value" : 0, "name" : "HDFS_READ_OPS" },
          { "value" : 0, "name" : "HDFS_LARGE_READ_OPS" },
          { "value" : 0, "name" : "HDFS_WRITE_OPS" }
        ]
      },
      {
        "counterGroupName" : "org.apache.hadoop.mapreduce.TaskCounter",
        "counter" : [
          { "value" : 0, "name" : "COMBINE_INPUT_RECORDS" },
          { "value" : 0, "name" : "COMBINE_OUTPUT_RECORDS" },
          { "value" : 460, "name" : "REDUCE_INPUT_GROUPS" },
          { "value" : 2235, "name" : "REDUCE_SHUFFLE_BYTES" },
          { "value" : 460, "name" : "REDUCE_INPUT_RECORDS" },
          { "value" : 0, "name" : "REDUCE_OUTPUT_RECORDS" },
          { "value" : 0, "name" : "SPILLED_RECORDS" },
          { "value" : 1, "name" : "SHUFFLED_MAPS" },
          { "value" : 0, "name" : "FAILED_SHUFFLE" },
          { "value" : 1, "name" : "MERGED_MAP_OUTPUTS" },
          { "value" : 26, "name" : "GC_TIME_MILLIS" },
          { "value" : 860, "name" : "CPU_MILLISECONDS" },
          { "value" : 107839488, "name" : "PHYSICAL_MEMORY_BYTES" },
          { "value" : 1123147776, "name" : "VIRTUAL_MEMORY_BYTES" },
          { "value" : 57475072, "name" : "COMMITTED_HEAP_BYTES" }
        ]
      },
      {
        "counterGroupName" : "Shuffle Errors",
        "counter" : [
          { "value" : 0, "name" : "BAD_ID" },
          { "value" : 0, "name" : "CONNECTION" },
          { "value" : 0, "name" : "IO_ERROR" },
          { "value" : 0, "name" : "WRONG_LENGTH" },
          { "value" : 0, "name" : "WRONG_MAP" },
          { "value" : 0, "name" : "WRONG_REDUCE" }
        ]
      },
      {
        "counterGroupName" : "org.apache.hadoop.mapreduce.lib.output.FileOutputFormatCounter",
        "counter" : [
          { "value" : 0, "name" : "BYTES_WRITTEN" }
        ]
      }
    ],
    "id" : "attempt_1326381300833_2_2_m_0_0"
  }
}
XML response

HTTP Request:
GET http://history-server-http-address:port/ws/v1/history/mapreduce/jobs/job_1326381300833_2_2/tasks/task_1326381300833_2_2_m_0/attempts/attempt_1326381300833_2_2_m_0_0/counters
Accept: application/xml
Response Header:
HTTP/1.1 200 OK
Content-Type: application/xml
Content-Length: 2735
Server: Jetty(6.1.26)
Response Body:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<jobTaskAttemptCounters>
  <id>attempt_1326381300833_2_2_m_0_0</id>
  <taskAttemptCounterGroup>
    <counterGroupName>org.apache.hadoop.mapreduce.FileSystemCounter</counterGroupName>
    <counter><name>FILE_BYTES_READ</name><value>2363</value></counter>
    <counter><name>FILE_BYTES_WRITTEN</name><value>54372</value></counter>
    <counter><name>FILE_READ_OPS</name><value>0</value></counter>
    <counter><name>FILE_LARGE_READ_OPS</name><value>0</value></counter>
    <counter><name>FILE_WRITE_OPS</name><value>0</value></counter>
    <counter><name>HDFS_BYTES_READ</name><value>0</value></counter>
    <counter><name>HDFS_BYTES_WRITTEN</name><value>0</value></counter>
    <counter><name>HDFS_READ_OPS</name><value>0</value></counter>
    <counter><name>HDFS_LARGE_READ_OPS</name><value>0</value></counter>
    <counter><name>HDFS_WRITE_OPS</name><value>0</value></counter>
  </taskAttemptCounterGroup>
  <taskAttemptCounterGroup>
    <counterGroupName>org.apache.hadoop.mapreduce.TaskCounter</counterGroupName>
    <counter><name>COMBINE_INPUT_RECORDS</name><value>0</value></counter>
    <counter><name>COMBINE_OUTPUT_RECORDS</name><value>0</value></counter>
    <counter><name>REDUCE_INPUT_GROUPS</name><value>460</value></counter>
    <counter><name>REDUCE_SHUFFLE_BYTES</name><value>2235</value></counter>
    <counter><name>REDUCE_INPUT_RECORDS</name><value>460</value></counter>
    <counter><name>REDUCE_OUTPUT_RECORDS</name><value>0</value></counter>
    <counter><name>SPILLED_RECORDS</name><value>0</value></counter>
    <counter><name>SHUFFLED_MAPS</name><value>1</value></counter>
    <counter><name>FAILED_SHUFFLE</name><value>0</value></counter>
    <counter><name>MERGED_MAP_OUTPUTS</name><value>1</value></counter>
    <counter><name>GC_TIME_MILLIS</name><value>26</value></counter>
    <counter><name>CPU_MILLISECONDS</name><value>860</value></counter>
    <counter><name>PHYSICAL_MEMORY_BYTES</name><value>107839488</value></counter>
    <counter><name>VIRTUAL_MEMORY_BYTES</name><value>1123147776</value></counter>
    <counter><name>COMMITTED_HEAP_BYTES</name><value>57475072</value></counter>
  </taskAttemptCounterGroup>
  <taskAttemptCounterGroup>
    <counterGroupName>Shuffle Errors</counterGroupName>
    <counter><name>BAD_ID</name><value>0</value></counter>
    <counter><name>CONNECTION</name><value>0</value></counter>
    <counter><name>IO_ERROR</name><value>0</value></counter>
    <counter><name>WRONG_LENGTH</name><value>0</value></counter>
    <counter><name>WRONG_MAP</name><value>0</value></counter>
    <counter><name>WRONG_REDUCE</name><value>0</value></counter>
  </taskAttemptCounterGroup>
  <taskAttemptCounterGroup>
    <counterGroupName>org.apache.hadoop.mapreduce.lib.output.FileOutputFormatCounter</counterGroupName>
    <counter><name>BYTES_WRITTEN</name><value>0</value></counter>
  </taskAttemptCounterGroup>
</jobTaskAttemptCounters>
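To make the nested counter structure easy to query, a client can flatten it into a dictionary keyed by (group name, counter name). This is a sketch using a trimmed version of the JSON response above (only four of the counters are included, to keep the example short):

```python
import json

# Trimmed sample of the jobTaskAttemptCounters JSON response shown above.
body = """
{ "jobTaskAttemptCounters" : {
    "id" : "attempt_1326381300833_2_2_m_0_0",
    "taskAttemptCounterGroup" : [
      { "counterGroupName" : "org.apache.hadoop.mapreduce.FileSystemCounter",
        "counter" : [ { "value" : 2363, "name" : "FILE_BYTES_READ" },
                      { "value" : 54372, "name" : "FILE_BYTES_WRITTEN" } ] },
      { "counterGroupName" : "org.apache.hadoop.mapreduce.TaskCounter",
        "counter" : [ { "value" : 2235, "name" : "REDUCE_SHUFFLE_BYTES" },
                      { "value" : 860, "name" : "CPU_MILLISECONDS" } ] } ] } }
"""

doc = json.loads(body)["jobTaskAttemptCounters"]
# Flatten the group -> counter hierarchy into one lookup table.
counters = {
    (group["counterGroupName"], c["name"]): c["value"]
    for group in doc["taskAttemptCounterGroup"]
    for c in group["counter"]
}
print(counters[("org.apache.hadoop.mapreduce.TaskCounter", "REDUCE_SHUFFLE_BYTES")])
```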