Closed Bug 596417 Opened 14 years ago Closed 14 years ago

Build the new add-on validator page

Categories

(addons.mozilla.org Graveyard :: Admin/Editor Tools, defect, P1)


Tracking

(Not tracked)

RESOLVED FIXED
5.12.3

People

(Reporter: jorgev, Assigned: chowse)

Details

(Whiteboard: [required amo-editors][devtools][Q32010][qa-])

Attachments

(1 file, 2 obsolete files)

The new validator page would feature:

1) A 3-level tree instead of 2 levels. Similar results would be grouped together, and these groups would be collapsed by default if they contain more than 10 results (see the sketch after this list).

2) Sanely-limited results for minified code files. See bug 532789.

3) Better validation for a version with multiple files. See https://preview.addons.mozilla.org/en-US/editors/review/117319 for an example of what is wrong: there are multiple validation links, but they all go to the same page. All tests are run in parallel, and normally only one of them finishes. On top of that, these are multiple copies of the *same* file. Add-ons with different XPI files for different platforms are extremely rare.
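
Rough sketch of the grouping/collapsing idea from point 1. The message fields ('file', 'id') and the helper name are made up for illustration; they are not the validator's actual output format:

from collections import defaultdict

COLLAPSE_THRESHOLD = 10  # groups larger than this start collapsed

def group_messages(messages):
    """Group similar validator messages into the third tree level.
    Assumes each message is a dict with 'file' and 'id' keys."""
    groups = defaultdict(list)
    for msg in messages:
        # Messages sharing an id within the same file count as "similar".
        groups[(msg['file'], msg['id'])].append(msg)
    tree = []
    for (filename, msg_id), items in sorted(groups.items()):
        tree.append({
            'file': filename,
            'id': msg_id,
            'messages': items,
            # Collapse noisy groups by default.
            'collapsed': len(items) > COLLAPSE_THRESHOLD,
        })
    return tree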

It is unclear whether this new page should assume integration with the new validator, the old one, or both. They appear to return very different data, and my review of the new validator is still pending; I'll try to finish it before the week is over.

The best place to find test cases is the review queues page: https://preview.addons.mozilla.org/en-US/editors/queue/nominated. Updates and nominations should work in the same way.

You can also use the validation page to upload and test different files: https://preview.addons.mozilla.org/en-US/developers/addon/validate. This is necessary for testing errors, since there shouldn't be any files on AMO that have errors.
This needs to be mocked up and reviewed.  I'll turn this into a planning bug and assign it to chowse.  The actual page will be split across more than one bug when it gets filed.
Assignee: nobody → chowse
Target Milestone: --- → 5.12.2
Attached image Validator Mock-up (obsolete) —
This is awesome!  Leaving this open for a couple of days for comments.
This is going to take some deep work on the validator itself. Right now, the validator doesn't report information on tests that pass. It also doesn't return the content of the line that the error was found on.

On the other hand, the validator does provide lots of information in the JSON output that could be useful for grouping errors by tier, error type, and other cool things. In the validator docs, check out the "messagetree" element in the output.
I thought about this some more: the validator currently doesn't asynchronously report its progress on each tier. To enable this, I'll build in functionality that lets Celery pass in which tier to test; then a task will be started in Celery for each tier (see the sketch below). I'll try to fit it in as soon as I finish my first round of bug fixes with the JS stuff.
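
Something along these lines (the per-tier hook is hypothetical since the validator doesn't expose one yet, and the task layout and tier numbers are not final):

from celery import shared_task

@shared_task
def validate_tier(xpi_path, tier):
    """Validate a single tier of the package and return its messages."""
    # run_single_tier() stands in for the per-tier hook described above;
    # the validator would need to grow this entry point.
    return run_single_tier(xpi_path, tier)

def run_single_tier(xpi_path, tier):
    """Hypothetical per-tier entry point into the validator."""
    raise NotImplementedError

def queue_all_tiers(xpi_path, tiers=(1, 2, 3, 4)):
    """Start one Celery task per tier so each can report back as it
    finishes, instead of one monolithic validation run."""
    # The default tier list is a placeholder, not the real set of tiers.
    return [validate_tier.delay(xpi_path, tier) for tier in tiers]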
(In reply to comment #4)
> This is going to take some deep work on the validator itself. Right now, the
> validator doesn't report information on tests that pass. 

Jorge: do you think this is necessary?  The default output for our unit tests only shows detailed info for failing tests, because all the passing tests create a bunch of noise. I think it would be reasonable to only show failures and warnings.

> It also doesn't return
> the content of the line that the error was found on.

I think this is important.

(In reply to comment #5)
> I thought about this some more: the validator currently doesn't asynchronously
> report its progress on each tier.

I don't care about this since the validator runs really fast right now.  Doing validation in steps is a low priority unless you think this is really easy.
I don't think it's important to show passes. And even if we needed to know which tests passed, the errors and warnings are enough to figure that out.
> > It also doesn't return
> > the content of the line that the error was found on.
> 
> I think this is important.

I agree.  We also provide context (the line before and the line after) right now, which I'd rather not lose.
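
For reference, once line numbers are available, pulling that context is cheap. A minimal sketch (assumes a readable file path and 1-based line numbers):

def line_with_context(path, lineno):
    """Return (previous, current, next) lines for a 1-based line
    number, using None at the file boundaries."""
    with open(path) as fh:
        lines = fh.read().splitlines()
    idx = lineno - 1
    if not 0 <= idx < len(lines):
        return None, None, None
    before = lines[idx - 1] if idx > 0 else None
    after = lines[idx + 1] if idx + 1 < len(lines) else None
    return before, lines[idx], after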
(In reply to comment #8)
> > > It also doesn't return
> > > the content of the line that the error was found on.
> > 
> > I think this is important.
> 
> I agree.  We also provide context (the line before and the line after) right
> now, which I'd rather not lose.

It's on the agenda for upcoming inclusions. The biggest setback at the moment is that the RDF library that we're taking advantage of provides no line/position information at all. This makes returning information about install.rdf really tough.

As for the various other files/data types, it shouldn't be too much of a challenge, but it will take some work to build this functionality in.

It would probably be a good idea to come up with an (at least temporary) solution to bug 532789 before I add this functionality. It wouldn't be wise to build this without thinking ahead. If anyone has ideas, feel free to jump in.
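
One temporary idea for that: a simple per-file cap on reported messages, roughly like this (the cap value and message shape are just placeholders, not a decided design):

from collections import defaultdict

def cap_messages(messages, per_file_limit=25):
    """Keep at most `per_file_limit` messages per file and report how
    many were dropped, so minified files don't flood the results."""
    counts = defaultdict(int)
    dropped = defaultdict(int)
    kept = []
    for msg in messages:
        filename = msg.get('file')
        if counts[filename] < per_file_limit:
            counts[filename] += 1
            kept.append(msg)
        else:
            dropped[filename] += 1
    return kept, dict(dropped)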
> It's on the agenda for upcoming inclusions. The biggest setback at the moment
> is that the RDF library that we're taking advantage of provides no
> line/position information at all. This makes returning information about
> install.rdf really tough.
> 
> As for the various other files/data types, it shouldn't be too much of a
> challenge, but it will take some work to build this functionality in.

I'm ok returning None for install.rdf line numbers / context for now and filing a bug for a future enhancement.


> It would probably be a good idea to come up with an (at least temporary)
> solution to bug 532789 before I add this functionality. It wouldn't be wise to
> build this without thinking ahead. If anyone has ideas, feel free to jump in.

done!
To summarize feedback/todo:


- No tier-dependent tests on the front end.  We get back a single payload of results.  That doesn't mean every test necessarily ran; it just means the front end won't show the results separately by tier.

- Don't show individual passing tests

- We need the images used on this page (unless we can use current ones on the site?)
(In reply to comment #11)
> - Don't show individual passing tests

In that case, I may add an 'every test in this suite passed' block, similar to the 'this suite was not run' block. I'll post this shortly.

> - We need the images used on this page (unless we can use current ones on the
> site?)

There may be some reusable icons (I believe the warning and failure icons are used elsewhere on AMO), but just in case, here are all the icons used in the mocks:

http://people.mozilla.com/~chowse/drop/amo/validator/assets/
Attached image Validator Mock-up (Updated) (obsolete) —
Updated version w/ a 'no errors, no warnings' block added.
Target Milestone: 5.12.2 → 5.12.3
Status: NEW → RESOLVED
Closed: 14 years ago
Resolution: --- → FIXED
Last adjustment: removed passing tests from results. If all tests in a suite pass, use a special 'passing' block (see bottom of the mock).
Attachment #486971 - Attachment is obsolete: true
Attachment #488219 - Attachment is obsolete: true
Product: addons.mozilla.org → addons.mozilla.org Graveyard