Tag Archives: Testing

Load overrides.zcml in plone.app.testing

Today I was working on a project where we use overrides.zcml to easily override some default Plone behavior. All was working fine (in the browser, that is) until I started writing tests for our custom behavior.

First thing I noticed was that the overrides.zcml was not loaded in our test layer. “Doh, I need to load it manually with the loadZCML() method!” I thought to myself. Boy was I wrong :).

The problem with using loadZCML() is that it loads whatever ZCML you point it at in the “normal” way. So, obviously, you get conflict errors, since the declarations in your overrides.zcml clash directly with the very declarations they are meant to override. Hence I needed to find a way to tell plone.app.testing to load my overrides.zcml in an “override” manner and not throw conflict errors.

After quite some research and asking around, I got a tip from JC Brand on the #plone IRC channel: use xmlconfig.includeOverrides(). This indeed got me the exact result I wanted.

Here’s a snippet for my future self and any other plonista struggling with the same problem who happens to stumble upon this blog:

from plone.app.testing import PLONE_FIXTURE
from plone.app.testing import PloneSandboxLayer
from plone.testing import z2
from zope.configuration import xmlconfig


class MyProductLayer(PloneSandboxLayer):

    defaultBases = (PLONE_FIXTURE,)

    def setUpZope(self, app, configurationContext):
        import my.product
        # Load the package's configure.zcml the usual way.
        self.loadZCML(package=my.product)
        # Load overrides.zcml in "override" mode so its declarations replace
        # the ones they override instead of conflicting with them.
        xmlconfig.includeOverrides(configurationContext, 'overrides.zcml', package=my.product)
        z2.installProduct(app, 'my.product')
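
To actually run tests against this layer, you would typically instantiate it and wrap it in an IntegrationTesting (or FunctionalTesting) layer, as usual with plone.app.testing. Here is a minimal sketch; the fixture names are my own placeholders:

from plone.app.testing import IntegrationTesting

MY_PRODUCT_FIXTURE = MyProductLayer()

# Test cases then set layer = MY_PRODUCT_INTEGRATION_TESTING to get a Plone
# sandbox with my.product (and its overrides.zcml) loaded.
MY_PRODUCT_INTEGRATION_TESTING = IntegrationTesting(
    bases=(MY_PRODUCT_FIXTURE,),
    name='MyProductLayer:Integration')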

Robot on Travis – uploading results to S3

This is a walkthrough of how one can upload screenshots and other output files produced by Robot Framework in a Travis CI build to Amazon S3. The reason we want to do this is to be able to inspect what Robot sees and have more information when a test fails. It’s written with some things specific to Plone development, but it should still be useful for any other framework/language supported by Travis.

Preparing Amazon S3

  1. Go to http://aws.amazon.com/ and sign up & log in.
  2. Go to http://aws.amazon.com/s3/ and click “Sign Up Now” to enable Amazon S3 for your account.
  3. Go to https://console.aws.amazon.com/s3/home, click “Create Bucket” and name the bucket “my-travis-builds” or something similar. Travis will upload screenshots into this bucket.
  4. Go to https://console.aws.amazon.com/iam/home, click “Users” in the left navigation sidebar and then click the “Create New Users” button. Enter “travis” as the username and keep the “Generate an access key for each User” option checked. This is the user that Travis CI will use to upload files to your Amazon S3 account. When the user is created, click “Download credentials” — we’ll need them later.
  5. Now click on the “travis” user, select the “Permissions” tab and click “Attach User Policy”. Select “Custom Policy” and click “Select”. Enter “travis-upload-only” or similar as the Policy Name and paste the following into the Policy Document field:
    {
      "Statement": [
        {
          "Action": [
            "s3:ListObject",
            "s3:PutObject"
          ],
          "Effect": "Allow",
          "Resource": [
            "arn:aws:s3:::my-travis-builds"
          ]
        },
        {
          "Action": [
            "s3:ListObject",
            "s3:PutObject"
          ],
          "Effect": "Allow",
          "Resource": [
            "arn:aws:s3:::my-travis-builds/*"
          ]
        },
        {
          "Action": [
            "s3:ListAllMyBuckets"
          ],
          "Effect": "Allow",
          "Resource": [
            "arn:aws:s3:::*"
          ]
        }
      ]
    }
    

    This policy gives the “travis” user the minimal permissions required to upload files into the “my-travis-builds” bucket with s3cmd. We are now ready to start uploading!

 

Preparing s3cmd

Travis will later use s3cmd to upload files to Amazon S3. Before moving on to configuring Travis, you need to add a “.s3cfg” file to your repository. This file configures s3cmd with access credentials. Open the “credentials.csv” you downloaded earlier when creating a “travis” user through Amazon IAM and paste access and secret keys into “.s3cfg”. Commit & push.

[default]
access_key = INSERT ACCESS KEY FROM CREDENTIALS.CSV
secret_key = INSERT SECRET KEY FROM CREDENTIALS.CSV

 

Configuring Travis CI

I’m assuming you already have a “.travis.yml” in your repository and you are already running builds on Travis CI. If this is not the case, check out the Travis CI getting-started documentation to get up to speed.

Then, if you haven’t yet, add the following two lines to “before_script” in your .travis.yml file to start the virtual X framebuffer (Xvfb), so the browser driven by Robot has a display to run against.

before_script:
  - "export DISPLAY=:99.0"
  - "sh -e /etc/init.d/xvfb start"

Moving on, you’ll need “s3cmd” installed on your Travis VM, so add the following to your .travis.yml.

install:
  - sudo apt-get install s3cmd

Now, as the last step, add the following line to “after_script” in your .travis.yml file. This uses the s3cmd installed above and the .s3cfg added in the previous section to upload screenshots created by Robot to the “my-travis-builds” bucket on S3, inside the #<travis_job_id> folder.

after_script:
  - s3cmd put --acl-private --guess-mime-type --config=.s3cfg selenium-screenshot* s3://my-travis-builds/#$TRAVIS_JOB_ID/

Now go back to https://console.aws.amazon.com/s3/home and bask in the glory of having Robot test screenshots in your S3 bucket!

[Screenshot: Robot screenshots in the “my-travis-builds” bucket]

Testing log output

ATTRIBUTION: This post was inspired by Domen’s “Mocking logging module for unittests” post from back in 2009. Most of the code below is his, with some Plone-specific bits and pieces added.

I was recently debugging an installation of our niteoweb.click2sell package and needed more verbose error logging. After adding support for it in the code, I wondered if and how I should test what gets written to the log under different scenarios. The final result of my research looks like this:

  • Create a logger handler that stores all records in a list.
  • Add this handler to the logger on layer setup.
  • Use it in tests to assert log output.
  • Reset its list on test teardown.

First, let’s look at the code for the logging handler, which you would likely put in the same module where you define your test cases and such (base.py in my case):

import logging


class MockedLoggingHandler(logging.Handler):
    """A logging handler that stores emitted records in class-level lists, one per level."""

    debug = []
    warning = []
    info = []
    error = []

    def emit(self, record):
        # Append the formatted message to the list matching the record's level.
        getattr(self.__class__, record.levelname.lower()).append(record.getMessage())

    @classmethod
    def reset(cls):
        for attr in dir(cls):
            if isinstance(getattr(cls, attr), list):
                setattr(cls, attr, [])

Now, assuming you use plone.app.testing, add the following lines somewhere in your layer setup (setUpPloneSite() in my case):

logger = logging.getLogger('your.package')
logger.addHandler(MockedLoggingHandler())
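
For context, here is a minimal sketch of where those two lines could live; the layer class name and the import path of the handler are placeholders (matching the base.py mentioned above):

import logging

from plone.app.testing import PLONE_FIXTURE
from plone.app.testing import PloneSandboxLayer

from your.package.tests.base import MockedLoggingHandler


class YourPackageLayer(PloneSandboxLayer):

    defaultBases = (PLONE_FIXTURE,)

    def setUpPloneSite(self, portal):
        # Attach the mocked handler so everything logged by 'your.package'
        # ends up in MockedLoggingHandler's class-level lists.
        logger = logging.getLogger('your.package')
        logger.addHandler(MockedLoggingHandler())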

You are ready to assert log output in your tests:

def test_log_output(self):
    some_method_that_prints_to_log()
    from your.package.tests.base import MockedLoggingHandler as logger
    self.assertEqual(logger.error[0], "Error occurred!")

Finally, remember to clear the list of log records in your test teardown:

def tearDown(self):
    from your.package.tests.base import MockedLoggingHandler as logger
    logger.reset()

To see this code in a context of a working package, take a peek into niteoweb.click2sell’s tests.

Happy testing!

Testing memoized methods

For a recent Plone project we needed to write unit tests for methods that were using plone.memoize for caching return values.

With a standard PloneTestCase I got the following error:

Error in test test_main_image (niteoweb.elcond.tests.test_stol.TestContent)
Traceback (most recent call last):
 File "/Users/zupo/work/python/parts/opt/lib/python2.6/unittest.py", line 279, in run
 testMethod()
 File "/Users/zupo/work/niteoweb.elcond/src/niteoweb/elcond/tests/test_stol.py", line 75, in test_main_image
 image = self.portal.stol.main_image()
 File "/Users/zupo/work/niteoweb.elcond/src/niteoweb/elcond/content/stol.py", line 91, in main_image
 images = self.images(sort_limit=1)
 File "/Users/zupo/.buildout/eggs/plone.memoize-1.1-py2.6.egg/plone/memoize/view.py", line 21, in memogetter
 annotations = IAnnotations(request)
TypeError: ('Could not adapt', None, <InterfaceClass zope.annotation.interfaces.IAnnotations>)

The cause of the problem is that the request used in the test does not provide IAttributeAnnotatable, so the IAnnotations adapter cannot store its data in the request’s __annotations__ attribute. The solution is to add the lines below to the afterSetUp() method of your TestCase.

from zope.annotation.interfaces import IAttributeAnnotatable
from zope.interface import directlyProvides
from zope.publisher.browser import TestRequest

# Mark the request as annotatable so that IAnnotations(request) has an
# __annotations__ attribute to store memoized values in.
request = TestRequest()
directlyProvides(request, IAttributeAnnotatable)
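
As a quick sanity check (not part of the original fix, just to illustrate the mechanism), the adapter lookup that previously failed now succeeds, assuming zope.annotation’s adapters are registered, which they are in a Plone test environment:

from zope.annotation.interfaces import IAnnotations

# The adapter stores its data on request.__annotations__, which is
# exactly what plone.memoize relies on for caching return values.
annotations = IAnnotations(request)
annotations['some.key'] = 'some value'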

Testing session in Plone 3.3

Yesterday, while I was working on LifeInLjubljana.si, I had to test whether some data was correctly written to Plone’s session. I couldn’t use the standard ptc.FunctionalTestCase as it does not set up a session, and I would get errors like this:

(Pdb) self.context.session_data_manager
*** AttributeError: session_data_manager
(Pdb) self.context.REQUEST.SESSION
*** AttributeError: SESSION

Following a tip on the Plone-Users mailing list and some trial and error, I came up with a solution. Below is my test case, with which I can build functional tests for sessions:

from Products.PloneTestCase import PloneTestCase as ptc
from Testing import ZopeTestCase as ztc


class SessionFunctionalTestCase(ptc.FunctionalTestCase):
    """ We use this base class for all the session-related tests in this package. """
    
    def afterSetUp(self):
        # Set up sessioning objects
        ztc.utils.setupCoreSessions(self.app)

    class Session(dict):
        """ A minimal stand-in for a Zope session: a dict with a set() method. """
        def set(self, key, value):
            self[key] = value

    def _setup(self):
        ptc.FunctionalTestCase._setup(self)
        # Make the fake session available as REQUEST.SESSION in tests.
        self.app.REQUEST['SESSION'] = self.Session()
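
Here is a hedged usage sketch (the test class name and session key are made up) of how a test built on this base class could exercise the session:

class TestSessionData(SessionFunctionalTestCase):

    def test_session_roundtrip(self):
        # The fake session installed in _setup() is reachable the same way
        # application code usually reaches it: via REQUEST.SESSION.
        session = self.app.REQUEST.SESSION
        session.set('visited', True)
        self.assertEqual(session['visited'], True)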