BigBlueButton Puppeteer Tests
Tests for BigBlueButton using Puppeteer, Chromium and Jest.
Get BBB URL and Secret and configure .env file
To run these tests with an existing BigBlueButton server, make sure you have a server set up, and that you have the server's URL and secret. These are the same URL and secret you would use to make API calls to the server. If you do not have them, you can find them by running bbb-conf --secret from a terminal on the server.
Copy the .env-template file to a new file named .env. In the .env file, add your BigBlueButton server URL and secret so the tests know which server to connect to.
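For example, the finished .env might look like the following. The exact variable names come from .env-template; the names and values below are placeholders, not the authoritative ones:
# illustrative only: copy the real variable names from .env-template
BBB_SERVER_URL=https://bbb.example.com/bigbluebutton/
BBB_SHARED_SECRET=secret-reported-by-bbb-conf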
Setup
To run these tests, you will need the following:
- Ubuntu 16.04 or later
- Node.js 8.11.4 or later
- Docker
These instructions assume you have the BigBlueButton repository cloned into a directory named bigbluebutton.
First, you need to have the dependencies installed with meteor npm install, run from the bigbluebutton-html5 directory. When Puppeteer installs, it will automatically install the Chromium browser in which the tests will run.
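For example, assuming the repository was cloned into a directory named bigbluebutton:
$ cd bigbluebutton/bigbluebutton-html5
$ meteor npm install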
To run individual tests, you can also optionally install Jest globally with sudo npm install jest -g.
Then install the test dependencies from the tests/puppeteer directory:
$ cd tests/puppeteer
$ npm install
Running the tests with an existing BigBlueButton server (All in one)
To run all the tests at once, run npm test.
Running a single test with an existing BigBlueButton server (Specific test)
To run a specific test from bash:
$ ./tests/puppeteer/run.sh -t testcase
Test case list: webcamlayout, whiteboard, webcam, virtualizedlist, user, sharednotes, screenshare, presentation, notifications, customparameters, chat, breakout, audio.
If you have Jest installed globally, you can run individual tests with jest TEST [TEST...]. The tests are found in the .test.js files, but you may omit the file extensions when running them.
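For example, assuming Jest is installed globally and run from the tests/puppeteer directory so it picks up the local Jest configuration, the chat and notifications suites could be run with:
$ cd tests/puppeteer
$ jest chat notifications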
Debugging, Metrics and Evidence
Debugging
To debug the tests, you will need to set DEBUG=true; when DEBUG is set to true, the logs will be printed in the console from which you start the tests.
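For example, assuming DEBUG is read from the same .env file as the other flags described below, add:
DEBUG=true
and then start the tests as usual with npm test.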
Debugging output will look like the following:
console.log
19-Jan-2021 13:03:30 Meeting ID: random-6850458
Getting Metrics
To run the tests and collect their metrics, you will need to set BBB_COLLECT_METRICS=true; when BBB_COLLECT_METRICS is set to true, the metrics are generated at the end of each test inside the data/test-date-testName/metrics folder; for example: data/test-19-01-2021-pollResultsChatMessage/metrics.
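For example, after adding BBB_COLLECT_METRICS=true to .env and running a spec, the generated metrics can be inspected directly (the path below is the example given above):
$ ls data/test-19-01-2021-pollResultsChatMessage/metrics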
Getting Evidence
Generating evidence means taking screenshots of the client during testing. To enable this, set GENERATE_EVIDENCES to true in .env. The screenshots will be saved in data/test-date-testName/screenshots; for example: data/test-19-01-2021-pollResultsChatMessage/screenshots.
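The three flags above can presumably be combined; assuming they are independent of each other, a .env configured for a fully instrumented run would contain:
DEBUG=true
BBB_COLLECT_METRICS=true
GENERATE_EVIDENCES=true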
Visual Regression
Our test suite includes visual regression specs that can be executed separately with npm run test-visual-regression (desktop only). It will take screenshots of various views and components of the client, save them in the tests/puppeteer/__image_snapshots__ folder, and put the diffs for failing cases into tests/puppeteer/__image_snapshots__/__diff_output__.
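For example, from the tests/puppeteer directory:
$ npm run test-visual-regression
If the suite relies on Jest's snapshot handling under the hood (as the __image_snapshots__ layout suggests), an intentional UI change would typically be accepted by re-running with Jest's --updateSnapshot flag; whether the npm script forwards extra arguments (npm run test-visual-regression -- -u) is an assumption to verify against package.json.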