Importing Wikipedia in Plone
Eric BREHAULT – Plone Conference 2013
Makina Corpus
October 02, 2013
Transcript
Importing Wikipedia in Plone
Eric BREHAULT – Plone Conference 2013
ZODB is good at storing objects
• Plone contents are objects,
• we store them in the ZODB,
• everything is fine, end of the story.
But what if...
...we want to store non-contentish records? Like polls, statistics, mailing-list subscribers, etc., or any business-specific structured data.
Store them as contents anyway
That is a powerful solution. But there are two major problems...
Problem 1: You need to manage a secondary system
• you need to deploy it,
• you need to back it up,
• you need to secure it,
• etc.
Problem 2: I hate SQL
No explanation here. I think I just cannot digest it...
How to store many records in the ZODB?
• Is the ZODB strong enough?
• Is the ZCatalog strong enough?
My grandmother often told me: "If you want to become stronger, you have to eat your soup."
Where do we find a good soup for Plone?
In a super souper!!!
souper.plone and souper
• It provides both storage and indexing (see the index declaration sketch below).
• A record can store any persistent, picklable data.
• Created by BlueDynamics.
• Based on ZODB BTrees, node.ext.zodb, and repoze.catalog.
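The slides never show how a soup's indexes are declared. Below is a minimal sketch, based on the souper documentation, of a catalog factory for the 'mysoup' soup used in the following slides; the 'user', 'text', and 'keywords' indexes match the attributes set there, while the class name and registration details are assumptions.

from souper.interfaces import ICatalogFactory
from souper.soup import NodeAttributeIndexer
from zope.interface import implementer
from repoze.catalog.catalog import Catalog
from repoze.catalog.indexes.field import CatalogFieldIndex
from repoze.catalog.indexes.keyword import CatalogKeywordIndex
from repoze.catalog.indexes.text import CatalogTextIndex

@implementer(ICatalogFactory)
class MySoupCatalogFactory(object):
    # Builds the repoze.catalog indexes used by the 'mysoup' soup;
    # each indexer reads one attribute from record.attrs.
    def __call__(self, context=None):
        catalog = Catalog()
        catalog[u'user'] = CatalogFieldIndex(NodeAttributeIndexer('user'))
        catalog[u'text'] = CatalogTextIndex(NodeAttributeIndexer('text'))
        catalog[u'keywords'] = CatalogKeywordIndex(NodeAttributeIndexer('keywords'))
        return catalog

The factory is then registered as a named utility whose name matches the soup name, e.g. <utility factory=".catalog.MySoupCatalogFactory" name="mysoup" /> in ZCML.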
Add a record
>>> from souper.soup import get_soup, Record
>>> soup = get_soup('mysoup', context)
>>> record = Record()
>>> record.attrs['user'] = 'user1'
>>> record.attrs['text'] = u'foo bar baz'
>>> record.attrs['keywords'] = [u'1', u'2', u'ü']
>>> record_id = soup.add(record)
Record in record
>>> record.attrs['homeaddress'] = Record()
>>> record.attrs['homeaddress'].attrs['zip'] = '6020'
>>> record.attrs['homeaddress'].attrs['town'] = 'Innsbruck'
>>> record.attrs['homeaddress'].attrs['country'] = 'Austria'
Access a record
>>> from souper.soup import get_soup
>>> soup = get_soup('mysoup', context)
>>> record = soup.get(record_id)
Query
>>> from repoze.catalog.query import Eq, Contains
>>> [r for r in soup.query(Eq('user', 'user1') & Contains('text', 'foo'))]
[<Record object 'None' at ...>]
or using the CQE format:
>>> [r for r in soup.query("user == 'user1' and 'foo' in text")]
[<Record object 'None' at ...>]
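Not shown in the talk: updating an existing record requires reindexing the soup, and records can be removed with del. A small follow-up sketch, assuming the soup and record_id from the previous slides and the souper API as documented:

>>> record = soup.get(record_id)
>>> record.attrs['text'] = u'foo bar baz qux'
>>> soup.reindex(records=[record])  # reindex just this record
>>> soup.reindex()                  # or reindex everything
>>> del soup[record]                # remove the record entirely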
souper
• A soup container can be moved to a specific ZODB mount point,
• it can be shared across multiple independent Plone instances,
• souper works on Plone and Pyramid.
Plomino & souper • we use Plomino to build non-content
oriented apps easily, • we use souper to store huge amount of application data.
Plomino data storage
Originally, documents (= records) were ATFolders. Capacity: about 30,000.
Plomino data storage
Since 1.14, documents are pure CMF. Capacity: about 100,000. Usually the Plomino ZCatalog contains a lot of indexes.
Plomino & souper With souper, documents are just soup records.
Capacity: several millions.
Typical use case
• Store 500,000 addresses,
• be able to query them in full text and display the results on a map.
Demo
What is the limit? Can we import Wikipedia into souper?
Demo with 400,000 records
Demo with 5.5 million records
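The talk does not detail how such datasets were loaded. A hypothetical bulk-import sketch (the import_records name, the 'mysoup' soup, and the batch size are illustrative) that commits in batches so the ZODB transaction does not grow unbounded:

import transaction
from souper.soup import get_soup, Record

def import_records(context, rows, batch_size=10000):
    # Load an iterable of plain dicts into the 'mysoup' soup,
    # committing every batch_size records to keep memory bounded.
    soup = get_soup('mysoup', context)
    for i, row in enumerate(rows, start=1):
        record = Record()
        for key, value in row.items():
            record.attrs[key] = value
        soup.add(record)
        if i % batch_size == 0:
            transaction.commit()
    transaction.commit()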
Conclusion
• Usage performance is good,
• Plone performance is not impacted.
Use it!
Thoughts
• What about a REST API on top of it? (a rough sketch follows below)
• Massive import is long and difficult; could it be improved?
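For the REST API idea, here is a rough sketch of what a read-only JSON endpoint could look like as a Plone browser view; the view class, the 'mysoup' soup, and the 'q' parameter are assumptions, the records are assumed to hold JSON-serializable values, and the view would still need the usual ZCML registration:

import json
from Products.Five.browser import BrowserView
from repoze.catalog.query import Contains
from souper.soup import get_soup

class SoupSearchView(BrowserView):
    # Return the attrs of soup records matching a full-text query as JSON.
    def __call__(self):
        text = self.request.form.get('q', u'')
        soup = get_soup('mysoup', self.context)
        results = [dict(r.attrs) for r in soup.query(Contains('text', text))]
        self.request.response.setHeader('Content-Type', 'application/json')
        return json.dumps(results)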
Makina Corpus
For all questions related to this talk, please contact Éric Bréhault
[email protected]
Tel: +33 534 566 958
www.makina-corpus.com