
ID R&D provides a simple REST API service for testing the technology.

In addition to the present REST API reference, the IDLive Doc Server Docker container exposes its OpenAPI specification at http://<host>:8080/v3/api-docs, which can be imported into tools such as Swagger Editor or Postman. The container also exposes the Swagger UI at http://<host>:8080/swagger-ui/index.html.

This endpoint expects an application/json request body of the following format:

Default equivalent names (default-sr, default-pc, default-ps) can be used instead of the actual pipeline name. See usage example below.

The following validation errors cannot be ignored: DOCUMENT_NOT_FOUND, DOCUMENT_PHOTO_NOT_FOUND.

A successful (‘OK’) API response body is always in JSON format.

Response body examples

Interpretation of result

If the status code of any liveness check result in pipeline_results is not OK, the aggregate_liveness_probability and aggregate_image_quality_warnings fields are omitted, since the liveness check results cannot be aggregated in that case.

Aggregate liveness probability is the main response of the system, and it should always be used to make a liveness check decision. The image is accepted as "live" when probability is greater than 0.5. Probability value range is [0; 1].

Raw liveness score can be used for BPCER / APCER tuning and is provided mostly for calibration purposes. The score range is unbounded.

We strongly advise analyzing the image quality warnings and rejecting inappropriate images.
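The decision rule described above can be sketched as follows. Rejecting on any quality warning is an assumption about how a caller might act on aggregate_image_quality_warnings, not part of the API contract.

```python
def accept_as_live(aggregate_liveness_probability: float,
                   aggregate_image_quality_warnings: list[str]) -> bool:
    # Reject images that carry quality warnings, as advised; which
    # warnings are tolerable is ultimately a caller decision.
    if aggregate_image_quality_warnings:
        return False
    # The image is accepted as "live" when the probability exceeds 0.5;
    # the probability range is [0, 1].
    return aggregate_liveness_probability > 0.5
```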


Using multiple pool slots

Airflow tasks will each occupy a single pool slot by default, but they can be configured to occupy more with the pool_slots argument if required. This is particularly useful when several tasks that belong to the same pool don’t carry the same “computational weight”.

For instance, consider a pool with 2 slots, Pool(pool='maintenance', slots=2), and the following tasks:
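A sketch of such tasks (the task ids and commands are illustrative, and the operators are assumed to be defined inside a DAG):

```python
from airflow.operators.bash import BashOperator

# Two "light" tasks occupy one pool slot each (the default) ...
light_task_1 = BashOperator(
    task_id="light_task_1",
    bash_command="echo light 1",
    pool="maintenance",
)
light_task_2 = BashOperator(
    task_id="light_task_2",
    bash_command="echo light 2",
    pool="maintenance",
)
# ... while the "heavy" task is configured to take both slots of the pool.
heavy_task = BashOperator(
    task_id="heavy_task",
    bash_command="echo heavy",
    pool="maintenance",
    pool_slots=2,
)
```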

Since the heavy task is configured to use 2 pool slots, it depletes the pool when running. Therefore, any of the light tasks must queue and wait for the heavy task to complete before they are executed. Here, in terms of resource usage, the heavy task is equivalent to two light tasks running concurrently.

This setup can prevent system resources from being overwhelmed, which (in this example) could happen if a heavy and a light task ran concurrently. The two light tasks, on the other hand, can run concurrently, since each occupies only one pool slot, while the heavy task must wait for two pool slots to become available before it is executed.

Pools and SubDAGs do not interact as you might first expect. SubDAGs will not honor any pool you set on them at the top level; pools must be set on the tasks inside the SubDAG directly.

Server slots are logical segments of machine resources (CPU and RAM) loosely reserved for instances of a build executable process. Multiplay Hosting uses the server density to calculate the number of server slots during the machine provisioning process. The exact resources allocated per server slot depend on the server density you define in the fleet. By default, Multiplay Hosting allows build executable processes to exceed the allocated resources of their server slot by a small margin of tolerance. If a process exceeds this margin of tolerance consistently, Multiplay Hosting considers the server to be misbehaving.

POST /check_liveness_file

The following validation errors cannot be ignored: DOCUMENT_NOT_FOUND, DOCUMENT_PHOTO_NOT_FOUND.

This is a convenience endpoint that duplicates the functionality of /check_liveness but accepts file uploads instead of base64-encoded data. It is intended mainly for quick tests and evaluations with software that does not handle base64-encoded data well, e.g. Postman.

Only pipelines listed in the IDLIVEDOC_SERVER_AVAILABLE_PIPELINES environment variable can be specified.

The number of calibrations provided should either be zero or match the number of pipeline names provided.
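This constraint can also be checked client-side before sending a request; a minimal sketch:

```python
def check_calibrations(pipelines: list[str], calibrations: list[str]) -> None:
    # The number of calibrations must be zero or match the number of pipelines.
    if calibrations and len(calibrations) != len(pipelines):
        raise ValueError(
            f"expected 0 or {len(pipelines)} calibrations, got {len(calibrations)}"
        )
```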

The response format and interpretation of results are identical to those of the /check_liveness endpoint above.

A successful (‘OK’) API response body is always in JSON format. The API returns the list of features permitted by the present license, along with their expiration dates.

Response body examples

A successful (‘OK’) API response body is always in JSON format. The API returns the server name, version, license expiration date, and the value of the IDLIVEDOC_SERVER_AVAILABLE_PIPELINES environment variable.

Response body examples

The method accepts two dates, date_from and date_to, forming the date range for which metrics are calculated. Both dates are inclusive.

A successful (‘OK’) API response body is always in JSON format. The API returns basic system performance metrics.

Response body examples

GET https://pro-api.coingecko.com/api/v3/coins/

This endpoint allows you to query all the metadata of a coin (name, price, market cap, logo images, official websites, social media links, project description, public notice information, contract addresses, categories, exchange tickers, and more) on the CoinGecko coin page, based on a particular coin id.
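A sketch of preparing such a call from Python. The coin id ("bitcoin") is only an example, the x-cg-pro-api-key header name is assumed from CoinGecko's Pro API conventions, and no request is actually sent here.

```python
from urllib.request import Request

COINGECKO_PRO_BASE = "https://pro-api.coingecko.com/api/v3"

def coin_metadata_request(coin_id: str, api_key: str) -> Request:
    # Build (but do not send) a GET request for a coin's metadata.
    # Pro API calls are authenticated via the x-cg-pro-api-key header.
    return Request(
        f"{COINGECKO_PRO_BASE}/coins/{coin_id}",
        headers={"x-cg-pro-api-key": api_key},
        method="GET",
    )

req = coin_metadata_request("bitcoin", "YOUR_API_KEY")
```

Sending the request is then a matter of passing it to urllib.request.urlopen (or an equivalent HTTP client) with a valid API key.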

This schema snippet shows how to create a slot that can contain the tutorial banner and tutorial carousel.
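Since the snippet itself is not reproduced here, the following is a sketch of what such a slot schema might look like. The $id and content-type URIs are placeholders, and the exact trait/$ref URIs should be taken from the platform's schema documentation.

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "$id": "https://example.com/slot/tutorial-slot",
  "title": "Tutorial slot",
  "description": "A slot containing up to 5 tutorial banners or carousels",
  "type": "object",
  "properties": {
    "slotContent": {
      "type": "array",
      "minItems": 1,
      "maxItems": 5,
      "items": {
        "allOf": [
          { "$ref": "http://bigcontent.io/cms/schema/v1/core#/definitions/content-link" },
          {
            "properties": {
              "contentType": {
                "enum": [
                  "https://example.com/content/tutorial-banner",
                  "https://example.com/content/tutorial-carousel"
                ]
              }
            }
          }
        ]
      }
    }
  },
  "required": ["slotContent"]
}
```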

The slotContent is an array of up to 5 banners or carousels. This property is included in the required properties, so when you add a slot created from this schema to an edition, you must add some content for the slot to be valid. See adding the slot to an edition on this page for more details.

You will need to have registered the tutorial banner and tutorial carousel schemas in order to add content created from these content types to this slot.

If you don't want to use the tutorial banner and tutorial carousel with this slot, you can update the contentTypes array to include your own content types.

An example of creating a slot item using a slot type registered from the tutorial slot schema is shown in the image below. When you create a slot in the Content Library, you'll add some dummy content. In the image below a carousel and a banner have been added.

Adding the slot to an edition

When a slot created from this slot type is added to an edition, its validation status is shown as "Requires content": some content must be added to the slot, otherwise it fails slot validation and the edition cannot be scheduled. The slot must contain content because the slotContent property in the example schema is required.

When content has been added, the slot is valid and the edition can be scheduled.


Some systems can get overwhelmed when too many processes hit them at the same time. Airflow pools can be used to limit the execution parallelism on arbitrary sets of tasks. The list of pools is managed in the UI (Menu -> Admin -> Pools) by giving each pool a name and assigning it a number of worker slots. There you can also decide whether a pool should include deferred tasks in its calculation of occupied slots.

Tasks can then be associated with one of the existing pools by using the pool parameter when creating tasks:
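For example (the task id, command, and pool name are illustrative, and the operator is assumed to be defined inside a DAG):

```python
from airflow.operators.bash import BashOperator

# Associate the task with the "backfill" pool so it competes for that
# pool's worker slots rather than running unconstrained.
aggregate_db_message_job = BashOperator(
    task_id="aggregate_db_message_job",
    bash_command="echo aggregating",
    pool="backfill",
)
```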

Tasks will be scheduled as usual while the slots fill up. The number of slots occupied by a task can be configured by pool_slots (see section below). Once capacity is reached, runnable tasks get queued and their state will show as such in the UI. As slots free up, queued tasks start running based on the Priority Weights of the task and its descendants.

Note that if tasks are not given a pool, they are assigned to the default pool, default_pool, which is initialized with 128 slots and can be modified through the UI or CLI (but cannot be removed).

