About a year has passed since we introduced Digdag. How has our batch operation changed over that period? How has our attitude toward managing it changed? What has become visible, and what still gives us trouble? We will also introduce the plugins we have built.
SessionResource

- GET /api/sessions: list sessions from recent to old
- GET /api/sessions/{id}: get a session by id
- GET /api/sessions/{id}/attempts: list attempts of a session

AdminResource

- GET /api/admin/attempts/{id}/userinfo

AttemptResource

- GET /api/attempts: list attempts from recent to old
- GET /api/attempts?include_retried=1: list attempts from recent to old, including retried attempts
- GET /api/attempts?project=<name>: list attempts that belong to a particular project
- GET /api/attempts?project=<name>&workflow=<name>: list attempts that belong to a particular workflow
- GET /api/attempts/{id}: show an attempt
- GET /api/attempts/{id}/tasks: list tasks of an attempt
- GET /api/attempts/{id}/retries: list retried attempts of this attempt
- PUT /api/attempts: start a new session attempt
- POST /api/attempts/{id}/kill: kill an attempt
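As a reference point, here is a minimal sketch of calling one of these endpoints with Java 11's built-in HTTP client. The server address (localhost:65432, assumed to be the default `digdag server` port) and the project name `my_project` are placeholders for illustration, not values from this article.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ListAttempts {
    public static void main(String[] args) throws Exception {
        // Assumption: a digdag server is reachable on localhost:65432 and
        // contains a project named "my_project"; adjust for your environment.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:65432/api/attempts?project=my_project"))
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body()); // JSON list of attempts, newest first
    }
}
```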
The store_last_results option controls whether the query results are stored in the redshift.last_results parameter. Default: false.

- Setting first stores the first row in the parameter as an object (e.g. ${redshift.last_results.count}).
- Setting all stores all rows in the parameter as an array of objects (e.g. ${redshift.last_results[0].name}). If the number of rows exceeds the limit, the task fails.
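To make the difference between first and all concrete, the sketch below mocks the two result shapes with plain Jackson rather than Digdag's own classes; the column names and values (count, name, 42, foo) are made up for illustration.

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class LastResultsShapes {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        // store_last_results: first -> the first row is stored as a single object,
        // so a column is referenced like ${redshift.last_results.count}
        JsonNode first = mapper.readTree("{\"count\": 42}");
        System.out.println(first.get("count").asInt());       // 42

        // store_last_results: all -> all rows are stored as an array of objects,
        // so a column is referenced like ${redshift.last_results[0].name}
        JsonNode all = mapper.readTree("[{\"name\": \"foo\"}, {\"name\": \"bar\"}]");
        System.out.println(all.get(0).get("name").asText());  // foo
    }
}
```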
```java
// TODO store_last_results should be io.digdag.standards.operator.jdbc.StoreLastResultsOption
// instead of boolean to be consistent with pg> and redshift> operators but not implemented yet.
this.storeLastResults = params.get("store_last_results", boolean.class, false);
```