Importing Wikipedia in Plone
Eric BREHAULT – Plone Conference 2013
Makina Corpus
October 02, 2013
Transcript
Importing Wikipedia in Plone Eric BREHAULT – Plone Conference 2013
ZODB is good at storing objects
• Plone contents are objects,
• we store them in the ZODB,
• everything is fine, end of the story.
But what if ...
... we want to store non-contentish records? Like polls, statistics, mailing-list subscribers, etc., or any business-specific structured data.
Store them as contents anyway
That is a powerful solution. But there are two major problems...
Problem 1: You need to manage a secondary system
• you need to deploy it,
• you need to back it up,
• you need to secure it,
• etc.
Problem 2: I hate SQL
No explanation here. I think I just cannot digest it...
How to store many records in the ZODB?
• Is the ZODB strong enough?
• Is the ZCatalog strong enough?
My grandmother often told me
"If you want to become stronger, you have to eat your soup."
Where do we find a good soup for Plone?
In a super souper!!!
souper.plone and souper
• They provide both storage and indexing.
• Records can store any persistent, picklable data.
• Created by BlueDynamics.
• Based on ZODB BTrees, node.ext.zodb, and repoze.catalog.
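Before records can be queried, a catalog factory has to be registered for the soup; it declares which record attributes get indexed. A minimal sketch following the pattern documented by souper, with index names matching the 'user', 'text' and 'keywords' attributes used on the next slides (the factory class name is ours):

from zope.interface import implementer
from zope.component import provideUtility
from repoze.catalog.catalog import Catalog
from repoze.catalog.indexes.field import CatalogFieldIndex
from repoze.catalog.indexes.text import CatalogTextIndex
from repoze.catalog.indexes.keyword import CatalogKeywordIndex
from souper.interfaces import ICatalogFactory
from souper.soup import NodeAttributeIndexer

@implementer(ICatalogFactory)
class MySoupCatalogFactory(object):
    """Builds the repoze.catalog used to index 'mysoup' records."""

    def __call__(self, context=None):
        catalog = Catalog()
        # one index per record attribute we want to query on
        catalog[u'user'] = CatalogFieldIndex(NodeAttributeIndexer('user'))
        catalog[u'text'] = CatalogTextIndex(NodeAttributeIndexer('text'))
        catalog[u'keywords'] = CatalogKeywordIndex(NodeAttributeIndexer('keywords'))
        return catalog

# registered as a named utility, the name being the soup id
provideUtility(MySoupCatalogFactory(), name='mysoup')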
Add a record
>>> from souper.soup import get_soup
>>> from souper.soup import Record
>>> soup = get_soup('mysoup', context)
>>> record = Record()
>>> record.attrs['user'] = 'user1'
>>> record.attrs['text'] = u'foo bar baz'
>>> record.attrs['keywords'] = [u'1', u'2', u'ü']
>>> record_id = soup.add(record)
Record in record
>>> record.attrs['homeaddress'] = Record()
>>> record.attrs['homeaddress'].attrs['zip'] = '6020'
>>> record.attrs['homeaddress'].attrs['town'] = 'Innsbruck'
>>> record.attrs['homeaddress'].attrs['country'] = 'Austria'
Access record
>>> from souper.soup import get_soup
>>> soup = get_soup('mysoup', context)
>>> record = soup.get(record_id)
Query
>>> from repoze.catalog.query import Eq, Contains
>>> [r for r in soup.query(Eq('user', 'user1') & Contains('text', 'foo'))]
[<Record object 'None' at ...>]
or using CQE format
>>> [r for r in soup.query("user == 'user1' and 'foo' in text")]
[<Record object 'None' at ...>]
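One caveat not shown on the slides: the indexes are not updated automatically when an already-stored record is modified. A minimal sketch, assuming souper's documented reindex API:

>>> record.attrs['text'] = u'foo bar qux'
>>> soup.reindex(records=[record])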
souper
• a soup container can be moved to a specific ZODB mount point,
• it can be shared across multiple independent Plone instances,
• souper works on Plone and Pyramid.
Plomino & souper • we use Plomino to build non-content
oriented apps easily, • we use souper to store huge amount of application data.
Plomino data storage
Originally, documents (= records) were ATFolders. Capacity: about 30,000.
Plomino data storage
Since 1.14, documents are pure CMF. Capacity: about 100,000. Usually the Plomino ZCatalog contains a lot of indexes.
Plomino & souper With souper, documents are just soup records.
Capacity: several millions.
Typical use case
• Store 500,000 addresses,
• Be able to query them in full text and display the results on a map (sketched below).
Demo
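As an illustration of that use case, a hypothetical view function turning a full-text query into GeoJSON for the map. The soup id 'addresses' and the 'address', 'lat' and 'lon' attributes are assumptions, not from the talk, and 'address' would need a text index in the catalog factory:

import json
from souper.soup import get_soup
from repoze.catalog.query import Contains

def addresses_as_geojson(context, text):
    # full-text query on the soup, serialized as GeoJSON for the map
    soup = get_soup('addresses', context)  # hypothetical soup id
    features = []
    for record in soup.query(Contains('address', text)):
        features.append({
            'type': 'Feature',
            'geometry': {
                'type': 'Point',
                'coordinates': [record.attrs['lon'], record.attrs['lat']],
            },
            'properties': {'address': record.attrs['address']},
        })
    return json.dumps({'type': 'FeatureCollection', 'features': features})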
What is the limit?
Can we import Wikipedia in souper?
Demo with 400,000 records
Demo with 5.5 million records
Conclusion
• Usage performance is good,
• Plone performance is not impacted.
Use it!
Thoughts
• What about a REST API on top of it?
• Massive import is long and difficult; could it be improved? (a batching sketch follows below)
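On the second point, batching the ZODB transaction is the usual first step. A minimal sketch, assuming a CSV source, the 'mysoup' soup from the earlier slides, and an arbitrary batch size:

import csv
import transaction
from souper.soup import get_soup, Record

def import_csv(context, path, batch_size=10000):
    # commit in batches so the transaction buffer stays bounded
    # and a long import can be resumed after a failure
    soup = get_soup('mysoup', context)
    with open(path) as f:
        for i, row in enumerate(csv.DictReader(f), start=1):
            record = Record()
            record.attrs['user'] = row['user']
            record.attrs['text'] = row['text']
            soup.add(record)
            if i % batch_size == 0:
                transaction.commit()
    transaction.commit()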
Makina Corpus
For all questions related to this talk, please contact Éric Bréhault
[email protected]
Tel: +33 534 566 958
www.makina-corpus.com