I’ve been writing model specs for a while now and I’m gradually moving towards integration testing. Right now, I’m looking at the best way to test the output of my ActiveAdmin dashboard, but I’m unsure which type of spec is the right tool for the job.
My dashboard contains two tables which summarise yesterday’s and today’s orders. Really, this is a simple case of asserting that the response contains certain content given some existing orders.
So far, the spec types I’ve looked at are:
- controller: I’m not testing the controller (AA already does this).
- view: I’m not testing a view file so this is not relevant.
- feature: doesn’t feel right because I’m not testing user interaction. However, having Capybara available is very useful because I’d like to use matchers such as `has_content`.
- request: This seems like the most relevant because I’m testing the result of a single request rather than user interaction. However, RSpec Rails has removed Capybara support for this spec type.
In a “traditional” scenario I’d be able to test the individual components, i.e. view and controller, with their own spec types. But, in this scenario, I’m testing the output of a configurable plugin where I don’t “own” the view-controller part of the stack.
My dilemma is: Is it acceptable to write feature specs which merely test rendered output without testing user interactions? Or should I be writing request specs and forgoing the convenience of Capybara’s matchers? It seems like a catch-22, but there must be a good reason for Capybara support being removed from request specs.
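To illustrate the request-spec side of the trade-off: without Capybara I’d be reduced to asserting against the raw response body, something like the sketch below (the `/admin` path and the table headings are placeholders for my actual dashboard content):

```ruby
# spec/requests/admin_dashboard_spec.rb
require "rails_helper"

RSpec.describe "Admin dashboard", type: :request do
  it "includes the order summary tables" do
    get "/admin"

    expect(response).to have_http_status(:ok)
    # Plain string matching on the rendered HTML -- no Capybara,
    # so no has_content/have_content semantics (text normalisation,
    # ignoring markup, etc).
    expect(response.body).to include("Today's Orders")
    expect(response.body).to include("Yesterday's Orders")
  end
end
```

This works, but matching substrings of raw HTML is brittle compared to Capybara’s content matchers, which is exactly the convenience I’m reluctant to give up.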
Relevant versions:
- Rails 4.1.6
- RSpec 3.1.0
- RSpec Rails 3.1.0
- Capybara 2.4.1
Request specs have fallen very much out of favour, and for good reason.
Feature specs are likely the way to go for testing this type of functionality. There is user interaction – you’re visiting or navigating to a page in your app.
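As a rough sketch of what that feature spec could look like (the `Order` setup and the table headings are assumptions about the asker’s app; substitute your own factories and dashboard content):

```ruby
# spec/features/admin_dashboard_spec.rb
require "rails_helper"

RSpec.describe "Admin dashboard", type: :feature do
  # Hypothetical setup: one order from today and one from yesterday.
  let!(:todays_order)     { Order.create!(created_at: Time.current) }
  let!(:yesterdays_order) { Order.create!(created_at: 1.day.ago) }

  it "summarises yesterday's and today's orders" do
    visit "/admin"

    # Capybara's matchers are available here, which is the whole
    # point of reaching for a feature spec.
    expect(page).to have_content("Today's Orders")
    expect(page).to have_content("Yesterday's Orders")
  end
end
```

Visiting the page *is* the interaction; asserting on what’s rendered is a perfectly normal thing for a feature spec to do.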
However, if this is all just ActiveAdmin code, why are you testing it at all? Maybe one smoke test just to hit a URL and make sure it doesn’t explode, but I wouldn’t be going into the nitty-gritty of testing AA itself.
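Such a smoke test can be a one-liner per page, for example (path is illustrative):

```ruby
# spec/requests/admin_smoke_spec.rb
require "rails_helper"

RSpec.describe "Admin pages", type: :request do
  it "renders the dashboard without blowing up" do
    get "/admin"
    # A 200 is enough to catch a misconfigured register block,
    # a renamed column, or a broken partial.
    expect(response).to have_http_status(:ok)
  end
end
```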
If the question is whether to test at all, it depends on various factors, including, for example:
- how critical is it that new deployments work without incident (eg. public facing, many users, revenue generating),
- how much customization is in the resource registrations (eg. member and collection actions, has_many/has_one relationships, customized forms, use of decorators, non-trivial access authorization, etc),
- time and budget available vs. other priorities (eg. early-stage startup vs. established business call center), and
- expected lifespan and future maintenance needs (eg. will have multiple developers, continuously evolving business requirements).
Hope that helps.